EVENT DETECTION IN MULTI-VARIATE SCIENTIFIC SIMULATIONS USING FEATURE ANOMALY METRICS
Journal of Propulsion and Power
Journal of Turbomachinery
For film cooling of combustor linings and turbine blades, it is critical to be able to accurately model jets-in-crossflow. Current Reynolds-averaged Navier-Stokes (RANS) models often give unsatisfactory predictions in these flows, due in large part to model form error, which cannot be resolved through calibration or tuning of model coefficients. The Boussinesq hypothesis, upon which most two-equation RANS models rely, posits the existence of a non-negative scalar eddy viscosity, which gives a linear relation between the Reynolds stresses and the mean strain rate. This model is rigorously analyzed in the context of a jet-in-crossflow using the high-fidelity large eddy simulation data of Ruiz et al. (2015, "Flow Topologies and Turbulence Scales in a Jet-in-Cross-Flow," Phys. Fluids, 27(4), p. 045101), as well as RANS k-ε results for the same flow. It is shown that the RANS models fail to accurately represent the Reynolds stress anisotropy in the injection hole, along the wall, and on the lee side of the jet. Machine learning methods are developed to provide improved predictions of the Reynolds stress anisotropy in this flow.
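For reference, the hypothesis under scrutiny can be stated compactly: with $\nu_t \geq 0$ the scalar eddy viscosity, $k$ the turbulent kinetic energy, and $S_{ij}$ the mean strain rate, the Boussinesq model asserts

$$ -\overline{u_i' u_j'} = 2\,\nu_t S_{ij} - \tfrac{2}{3} k\,\delta_{ij} $$

so the anisotropy $b_{ij} = \overline{u_i' u_j'}/(2k) - \delta_{ij}/3$ is forced to align with $S_{ij}$, which is exactly the assumption the paper finds violated in the regions listed above.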
This work was conducted as part of a Harry S. Truman Fellowship Laboratory Directed Research and Development project. The goal was to use machine learning methods to provide uncertainty quantification and model improvements for Reynolds-Averaged Navier-Stokes (RANS) turbulence models. For applications of interest in energy, safety, and security, it is critical to be able to model turbulence accurately, yet current RANS models are unreliable for many flows of engineering relevance. Machine learning provides an avenue for developing improved models based on the data generated by high-fidelity simulations. In this project, machine learning methods were used both to predict when current RANS models would fail and to develop improved RANS closure models. A key aim was developing a tight feedback loop between scientific domain knowledge and data-driven methods. To this end, a methodology for incorporating known invariance constraints into the machine learning models was proposed and evaluated, and it was demonstrated that incorporating these constraints into the data-driven models improved performance and reduced computational cost. This research represents one of the first applications of deep learning to turbulence modeling.
8th AIAA Theoretical Fluid Mechanics Conference, 2017
The k-ε turbulence model has been described as perhaps "the most widely used complete turbulence model." This family of heuristic Reynolds-Averaged Navier-Stokes (RANS) turbulence closures is supported by a suite of model parameters that have been estimated by demanding that well-established canonical flows (homogeneous shear flow, log-law behavior, etc.) be reproduced. While this procedure does yield a set of so-called nominal parameters, it is abundantly clear that they do not provide a universally satisfactory turbulence model capable of simulating complex flows. Recent work on the Bayesian calibration of the k-ε model using jet-in-crossflow wind tunnel data has yielded parameter estimates that are far more predictive than the nominal values. Here we develop a self-similar asymptotic solution for axisymmetric jet-in-crossflow interactions and derive analytical estimates of the parameters that were inferred using Bayesian calibration. The self-similar method uses a near-field approach to estimate the turbulence model parameters while retaining the classical far-field scaling to model flow-field quantities. Our parameter values are seen to be far more predictive than the nominal values, as checked using RANS simulations and experimental measurements, and they are also closer to the Bayesian estimates than the nominal parameters are. A traditional simplified jet trajectory model is explicitly related to the turbulence model parameters and is shown to yield good agreement with measurements when using the analytically derived turbulence model coefficients. The close agreement between the turbulence model coefficients obtained via Bayesian calibration and those estimated analytically in this paper is consistent with the contention that the Bayesian calibration approach is firmly rooted in the underlying physical description.
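For context, the standard model ties the eddy viscosity to $k$ and $\varepsilon$ via $\nu_t = C_\mu k^2/\varepsilon$, and the nominal coefficient set referred to above (as conventionally attributed to Launder and Spalding) is

$$ (C_\mu,\; C_{\varepsilon 1},\; C_{\varepsilon 2},\; \sigma_k,\; \sigma_\varepsilon) = (0.09,\; 1.44,\; 1.92,\; 1.0,\; 1.3) $$

It is these five coefficients that the Bayesian calibration and the self-similar analysis re-estimate for the jet-in-crossflow configuration.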
Proceedings of the ASME Turbo Expo
In film cooling flows, it is important to know the temperature distribution resulting from the interaction between a hot main flow and a cooler jet. However, current Reynolds-averaged Navier-Stokes (RANS) models yield poor temperature predictions. A novel approach for RANS modeling of the turbulent heat flux is proposed, in which the simple gradient diffusion hypothesis (GDH) is assumed and a machine learning algorithm is used to infer an improved turbulent diffusivity field. This approach is implemented using three distinct data sets: two are used to train the model and the third is used for validation. The results show that the proposed method produces significant improvement compared to the common RANS closure, especially in the prediction of film cooling effectiveness.
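For reference, the GDH referred to above models the turbulent heat flux with a scalar turbulent diffusivity $\alpha_t$:

$$ \overline{u_j' T'} = -\alpha_t \frac{\partial \overline{T}}{\partial x_j} $$

The common closure fixes $\alpha_t = \nu_t/\mathrm{Pr}_t$ with a constant turbulent Prandtl number (often $\mathrm{Pr}_t \approx 0.85$); the proposed approach instead infers the $\alpha_t$ field from data while retaining the GDH form.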
23rd AIAA Computational Fluid Dynamics Conference, 2017
For many aerospace applications, there exists significant demand for more accurate turbulence models. Data-driven machine learning algorithms have the capability to accurately predict when Reynolds-Averaged Navier-Stokes (RANS) models will have increased model form uncertainty due to the breakdown of underlying model assumptions. These machine learning models can be used to adaptively trigger relevant model corrections in the regions where they are needed. This paper presents a framework for data-driven adaptive physics modeling that leverages known RANS model corrections and proven machine learning methods. This adaptive physics modeling framework is evaluated for two case studies: fully developed turbulent square duct flow and flow over a wavy wall. It is demonstrated that implementing model corrections zonally, based on machine learning classification of where underlying RANS model assumptions are violated, can achieve the same accuracy as implementing those corrections globally.
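A minimal sketch of the zonal-triggering idea (variable names and the synthetic data are illustrative, not from the paper): a classifier trained to flag cells where a RANS assumption breaks down gates where the model correction is applied.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 6))            # placeholder pointwise flow features
violated = (features[:, 0] > 1.0).astype(int)    # placeholder assumption-violation label

# Train a classifier to flag cells where a RANS assumption breaks down.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, violated)

# Zonal application: use the corrected closure only in flagged cells,
# keeping the baseline RANS closure everywhere else.
baseline_stress = rng.normal(size=(1000, 6))     # placeholder stress components per cell
corrected_stress = rng.normal(size=(1000, 6))
flagged = clf.predict(features).astype(bool)
zonal_stress = np.where(flagged[:, None], corrected_stress, baseline_stress)
```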
47th AIAA Fluid Dynamics Conference, 2017
We investigate a novel application of deep neural networks to modeling of errors in prediction of surface pressure fluctuations beneath a compressible, turbulent flow. In this context, the truth solution is given by Direct Numerical Simulation (DNS) data, while the predictive model is a wall-modeled Large Eddy Simulation (LES). The neural network provides a means to map relevant statistical flow-features within the LES solution to errors in prediction of wall pressure spectra. We simulate a number of flat plate turbulent boundary layers using both DNS and wall-modeled LES to build up a database with which to train the neural network. We then apply machine learning techniques to develop an optimized neural network model for the error in terms of relevant flow features.
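A hedged sketch of the regression setup described above (array names and sizes are placeholders): statistical LES flow features at a wall point map to the error in the predicted wall-pressure spectrum, here fit with a generic multilayer perceptron rather than the paper's specific optimized architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
les_features = rng.normal(size=(500, 8))      # placeholder statistical LES features
spectrum_error = rng.normal(size=(500, 12))   # placeholder DNS-minus-LES error per frequency band

# Map LES flow features to the error in the predicted wall-pressure spectrum.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
model.fit(les_features, spectrum_error)
predicted_error = model.predict(les_features)
```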
Proceedings of the ASME Turbo Expo
Classical RANS turbulence models have known deficiencies when applied to jets in crossflow. Identifying the linear Boussinesq stress-strain hypothesis as a major contributor to erroneous predictions, we consider and contrast two machine learning frameworks for turbulence model development. Gene Expression Programming, an evolutionary algorithm that employs a survival-of-the-fittest analogy, and a Deep Neural Network, based on neurological processing, each add non-linear terms to the stress-strain relationship, yielding Explicit Algebraic Stress Model-like closures. High-fidelity data from an inline jet-in-crossflow study are used to regress new closures. These models are then tested on a skewed jet to ascertain their predictive efficacy. For both methodologies, a vast improvement over the linear relationship is observed.
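Both methods effectively learn the scalar coefficient functions in a Pope-style integrity-basis expansion of the anisotropy tensor in the normalized mean strain-rate and rotation-rate tensors $S$ and $R$, of which the first few terms are

$$ b = \sum_n g^{(n)}(\lambda_1,\ldots,\lambda_5)\, T^{(n)}, \qquad T^{(1)} = S,\quad T^{(2)} = SR - RS,\quad T^{(3)} = S^2 - \tfrac{1}{3}\operatorname{tr}(S^2)\, I $$

The linear Boussinesq model retains only the $g^{(1)} T^{(1)}$ term, which is why adding the higher-order terms relaxes the stress-strain alignment constraint.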
Journal of Fluid Mechanics
There exists significant demand for improved Reynolds-Averaged Navier-Stokes (RANS) turbulence models that are informed by and can represent a richer set of turbulence physics. This paper presents a method of using deep neural networks to learn a model for the Reynolds stress anisotropy tensor from high-fidelity simulation data. A novel neural network architecture is proposed which uses a multiplicative layer with an invariant tensor basis to embed Galilean invariance into the predicted anisotropy tensor. It is demonstrated that this neural network architecture provides improved prediction accuracy compared with a generic neural network architecture that does not embed this invariance property. The Reynolds stress anisotropy predictions of this invariant neural network are propagated through to the velocity field for two test cases. For both test cases, significant improvement versus baseline RANS linear eddy viscosity and nonlinear eddy viscosity models is demonstrated.
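A minimal sketch of the multiplicative-layer idea (a network maps scalar invariants to coefficients, which a final layer contracts with a tensor basis so the output inherits the invariance of the basis tensors); layer counts, widths, and activations here are illustrative and may differ from the paper's architecture.

```python
import torch
import torch.nn as nn

class TensorBasisNN(nn.Module):
    # Maps scalar invariants to coefficients g_n, then contracts them with
    # a tensor basis; the predicted anisotropy is invariant whenever the
    # invariants and basis tensors are.
    def __init__(self, n_invariants=5, n_tensors=10, hidden=30):
        super().__init__()
        self.coeff_net = nn.Sequential(
            nn.Linear(n_invariants, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_tensors),
        )

    def forward(self, invariants, tensor_basis):
        # invariants: (batch, n_invariants); tensor_basis: (batch, n_tensors, 3, 3)
        g = self.coeff_net(invariants)
        return torch.einsum('bn,bnij->bij', g, tensor_basis)

model = TensorBasisNN()
b = model(torch.randn(4, 5), torch.randn(4, 10, 3, 3))  # (4, 3, 3) anisotropy
```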
Although increased availability of computational resources has enabled high-fidelity simulations (e.g., large eddy simulations) of turbulent flows, Reynolds-Averaged Navier-Stokes (RANS) models are still the dominant tools in industrial applications. However, the predictive capabilities of RANS models are limited by large model-form discrepancies in the Reynolds stress closure. Recently, a Physics-Informed Machine Learning (PIML) approach has been proposed to learn the functional form of the Reynolds stress discrepancy in RANS simulations based on available data, and it has been demonstrated that the learned discrepancy function can be used to improve Reynolds stresses in new flows where data are not available. However, due to a number of challenges, the improvements have only been demonstrated in the Reynolds stress prediction, not in the corresponding propagated quantities of interest (e.g., the velocity field). In this work, we investigate the prediction performance on the velocity field by propagating the corrected Reynolds stresses of the PIML approach. To enrich the input features, an integrity basis of invariants is implemented. Fully developed turbulent flow in a square duct is used as the test case. The discrepancy model is trained on flow fields from several Reynolds numbers and evaluated on a duct flow at a higher Reynolds number than any of the training cases. The predicted Reynolds stresses are propagated to the velocity field via the RANS equations. Numerical results show excellent predictive performance in both the Reynolds stresses and the propagated velocities, demonstrating the merits of the PIML approach for predictive turbulence modeling.
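As a concrete example of such a basis: for inputs built from the normalized mean strain-rate and rotation-rate tensors $S$ and $R$ alone, a minimal integrity basis consists of five independent invariants (the feature set used in the paper may be richer):

$$ \lambda_1 = \operatorname{tr}(S^2),\quad \lambda_2 = \operatorname{tr}(R^2),\quad \lambda_3 = \operatorname{tr}(S^3),\quad \lambda_4 = \operatorname{tr}(R^2 S),\quad \lambda_5 = \operatorname{tr}(R^2 S^2) $$

Any scalar function of $S$ and $R$ that is invariant under rotations can be expressed in terms of these five quantities, which is what makes them suitable inputs for a discrepancy model meant to generalize across flows.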
Journal of Computational Physics
In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
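A toy illustration of the two strategies compared above, using a synthetic rotation-invariant target (the task, features, and model choice are illustrative, not the paper's case studies):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import special_ortho_group

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 3, 3))                  # placeholder tensor inputs
y = np.einsum('bij,bji->b', A, A)                 # a rotation-invariant target

# Method 1: embed the invariance by training on invariant features of A.
invariants = np.stack([np.trace(A, axis1=1, axis2=2),
                       np.einsum('bij,bji->b', A, A)], axis=1)
m1 = RandomForestRegressor(random_state=0).fit(invariants, y)

# Method 2: train on raw tensor components under many random rotations,
# so the model must learn the invariance from the augmented data.
Q = special_ortho_group.rvs(3, size=20, random_state=0)    # random rotations
A_aug = np.einsum('rik,bkl,rjl->rbij', Q, A, Q).reshape(-1, 9)
m2 = RandomForestRegressor(random_state=0).fit(A_aug, np.tile(y, 20))
```

Method 2 multiplies the training set size by the number of sampled transformations, which is the source of the extra training cost the paper reports for the augmentation approach.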
Proceedings - 2015 IEEE 14th International Conference on Machine Learning and Applications, ICMLA 2015
The question of how to accurately model turbulent flows is one of the most long-standing open problems in physics. Advances in high performance computing have enabled direct numerical simulations of increasingly complex flows. Nevertheless, for most flows of engineering relevance, the computational cost of these direct simulations is prohibitive, necessitating empirical model closures for the turbulent transport. These empirical models are prone to "model form uncertainty" when their underlying assumptions are violated. Understanding, quantifying, and mitigating this model form uncertainty has become a critical challenge in the turbulence modeling community. This paper will discuss strategies for using machine learning to understand the root causes of the model form error and to develop model corrections to mitigate this error. Rule extraction techniques are used to derive simple rules for when a critical model assumption is violated. The physical intuition gained from these simple rules is then used to construct a linear correction term for the turbulence model which shows improvement over naive linear fits.
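A hedged sketch of the rule-extraction step (data and feature names are placeholders): a shallow decision tree fit to assumption-violation labels yields human-readable threshold rules of the kind described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 3))             # placeholder flow features
violated = (features[:, 0] > 0.5).astype(int)     # placeholder assumption-violation flag

# A shallow tree gives simple, interpretable rules for when the
# assumption fails, which is the kind of output rule extraction targets.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(features, violated)
print(export_text(tree, feature_names=['f1', 'f2', 'f3']))
```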
Proceedings of the ASME Turbo Expo
For film cooling of combustor linings and turbine blades, it is critical to be able to accurately model jets-in-crossflow. Current Reynolds-Averaged Navier-Stokes (RANS) models often give unsatisfactory predictions in these flows, due in large part to model form error, which cannot be resolved through calibration or tuning of model coefficients. The Boussinesq hypothesis, upon which most two-equation RANS models rely, posits the existence of a non-negative scalar eddy viscosity, which gives a linear relation between the Reynolds stresses and the mean strain rate. This model is rigorously analyzed in the context of a jet-in-crossflow using the high-fidelity large eddy simulation data of Ruiz et al. (2015), as well as RANS k-ε results for the same flow. It is shown that the RANS models fail to accurately represent the Reynolds stress anisotropy in the injection hole, along the wall, and on the lee side of the jet. Machine learning methods are developed to provide improved predictions of the Reynolds stress anisotropy in this flow.
The objective of this work is to investigate the efficacy of using calibration strategies from Uncertainty Quantification (UQ) to determine model coefficients for Large Eddy Simulation (LES). As the target methods are intended for engineering LES, uncertainty from the numerical aspects of the model must also be quantified. The ultimate goal of this research thread is to generate a cost-versus-accuracy curve for LES, such that cost can be minimized for an accuracy prescribed by an engineering need. Realization of this goal would enable LES to serve as a predictive simulation tool within the engineering design process.
Physics of Fluids
Reynolds-Averaged Navier-Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher-fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, AdaBoost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. Feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
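A hedged sketch of the per-assumption classification setup (features, labels, and sizes are synthetic placeholders; in the paper the labels come from comparing RANS fields against trusted DNS/LES data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
rans_features = rng.normal(size=(2000, 12))   # placeholder pointwise RANS features

# One binary marker per modeling assumption; synthetic labels here.
markers = {
    'anisotropic_eddy_viscosity': rng.integers(0, 2, 2000),
    'nonlinear_stress_strain':    rng.integers(0, 2, 2000),
    'negative_eddy_viscosity':    rng.integers(0, 2, 2000),
}

# Train and evaluate one classifier per assumption, as in the
# three-marker setup described above.
for name, labels in markers.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, rans_features, labels, cv=5)
    print(f'{name}: mean CV accuracy {scores.mean():.2f}')
```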