Grid Tailored Reduced-Order Models for Steady Hypersonic Aerodynamics
Abstract not provided.
AIAA Journal
High-speed aerospace engineering applications rely heavily on computational fluid dynamics (CFD) models for design and analysis. This reliance on CFD models necessitates performing accurate and reliable uncertainty quantification (UQ) of the CFD models, which can be very expensive for hypersonic flows. Additionally, UQ approaches are many-query problems requiring many runs with a wide range of input parameters. One way to enable computationally expensive models to be used in such many-query problems is to employ projection-based reduced-order models (ROMs) in lieu of the (high-fidelity) full-order model (FOM). In particular, the least-squares Petrov–Galerkin (LSPG) ROM (equipped with hyper-reduction) has demonstrated the ability to significantly reduce simulation costs while retaining high levels of accuracy on a range of problems, including subsonic CFD applications. This allows LSPG ROM simulations to replace the FOM simulations in UQ studies, making UQ tractable even for large-scale CFD models. This work presents the first application of LSPG to a hypersonic CFD problem: the Hypersonic International Flight Research Experimentation (HIFiRE-1) vehicle in a three-dimensional, turbulent Mach 7.1 flow. This paper shows the ability of the ROM to significantly reduce computational costs while maintaining high levels of accuracy in computed quantities of interest.
This project combines several new concepts to create a boundary layer transition prediction capability that is suitable for analyzing modern hypersonic flight vehicles. The first new concept is the use of "optimization" methods to detect the hydrodynamic instabilities that cause boundary layer transition; the use of this method removes the need for many limiting assumptions of other methods and enables quantification of the interactions between boundary layer instabilities and the flow field imperfections that generate them. The second new concept is the execution of transition analysis within a conventional hypersonic CFD code, using the same mesh and numerical schemes for the transition analysis and the laminar flow simulation. This feature enables rapid execution of transition analysis with less user oversight required and no interpolation steps needed.
AIAA Scitech 2020 Forum
High-speed aerospace engineering applications rely heavily on computational fluid dynamics (CFD) models for design and analysis due to the expense and difficulty of flight tests and experiments. This reliance on CFD models necessitates performing accurate and reliable uncertainty quantification (UQ) of the CFD models. However, it is very computationally expensive to run CFD for hypersonic flows due to the fine grid resolution required to capture the strong shocks and large gradients that are typically present. Additionally, UQ approaches are “many-query” problems requiring many runs with a wide range of input parameters. One way to enable computationally expensive models to be used in such many-query problems is to employ projection-based reduced-order models (ROMs) in lieu of the (high-fidelity) full-order model. In particular, the least-squares Petrov–Galerkin (LSPG) ROM (equipped with hyper-reduction) has demonstrated the ability to significantly reduce simulation costs while retaining high levels of accuracy on a range of problems including subsonic CFD applications [1, 2]. This allows computationally inexpensive LSPG ROM simulations to replace the full-order model simulations in UQ studies, which makes this many-query task tractable, even for large-scale CFD models. This work presents the first application of LSPG to a hypersonic CFD problem. In particular, we present results for LSPG ROMs of the HIFiRE-1 vehicle in a three-dimensional, turbulent Mach 7.1 flow, showcasing the ability of the ROM to significantly reduce computational costs while maintaining high levels of accuracy in computed quantities of interest.
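The LSPG projection underlying this approach can be illustrated on a hypothetical linear system. The sketch below (plain NumPy; all sizes and data are synthetic, not the SPARC/HIFiRE setup) minimizes the full-order residual over a POD subspace, which for a linear residual collapses to an ordinary least-squares problem:

```python
import numpy as np

# Hypothetical linear full-order model (FOM): residual r(x) = A x - b.
rng = np.random.default_rng(0)
n = 100
A = np.eye(n) + 0.02 * rng.standard_normal((n, n))   # well-conditioned system
b = rng.standard_normal(n)
x_fom = np.linalg.solve(A, b)                        # full-order solution

# POD basis Phi from snapshots (here, perturbed copies of the FOM solution).
snaps = np.column_stack([x_fom + 0.01 * rng.standard_normal(n) for _ in range(10)])
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
Phi = U[:, :5]                                       # retain 5 POD modes

# LSPG: minimize ||r(Phi y)||_2 over the reduced coordinates y.
# For a linear residual this reduces to least squares on A @ Phi.
y, *_ = np.linalg.lstsq(A @ Phi, b, rcond=None)
x_rom = Phi @ y                                      # reduced-order solution

rel_err = np.linalg.norm(x_rom - x_fom) / np.linalg.norm(x_fom)
```

The reduced problem has 5 unknowns instead of 100; for a nonlinear residual the same minimization is carried out iteratively (e.g., Gauss–Newton) with hyper-reduction to keep residual evaluations cheap.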
This report documents the initial testing of the Sandia Parallel Aerodynamics and Reentry Code (SPARC) to directly simulate hypersonic, turbulent boundary layer flow over a sharp 7-degree half-angle cone. This type of computation involves a tremendously large range of scales both in time and space, requiring a large number of grid cells and the efficient utilization of a large pool of resources. The goal of the simulation is to mimic and verify a wind tunnel experiment that seeks to measure the turbulent surface pressure fluctuations. These data are necessary for building a model to predict random vibration loading in the reentry flight environment. A low-dissipation flux scheme in SPARC is used on a 2.7 billion cell mesh to capture the turbulent fluctuations in the boundary layer flow. The grid is divided into 115,200 partitions and simulated using the Knights Landing (KNL) partition of the Trinity system. The parallel performance of SPARC is explored on the Trinity system, as well as on some of the other new architectures. Extracting data from the simulation shows good agreement with the experiment as well as with a colleague's simulation. The data provide a guide from which a new model can be built for better prediction of the reentry random vibration loads.
AIAA Aviation 2019 Forum
Near-wall turbulence models in Large-Eddy Simulation (LES) typically approximate near-wall behavior using a solution to the mean flow equations. This approach inevitably leads to errors when the modeled flow does not satisfy the assumptions surrounding the use of a mean flow approximation for an unsteady boundary condition. Herein, modern machine learning (ML) techniques are utilized to implement a coordinate-frame-invariant model of the wall shear stress that is derived specifically for complex flows for which mean near-wall models are known to fail. The model operates on a set of scalar and vector invariants based on data taken from the first LES grid point off the wall. Neural networks were trained and validated on spatially filtered direct numerical simulation (DNS) data. The trained networks were then tested on data to which they were never previously exposed, and the accuracy of the networks' wall-shear-stress predictions was compared to both a standard mean wall model approach and the true stress values taken from the DNS data. The ML approach considerably improved the accuracy of individual shear stress predictions and produced a more accurate distribution of wall shear stress values than the standard mean wall model. This result held both in regions where the standard mean approach typically performs satisfactorily and in regions where it is known to fail. It also held whether the networks were trained and tested on data from the same flow type/region or on data from different flow topologies.
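As a toy illustration of the invariant-input idea (not the paper's architecture, invariants, or data), the sketch below trains a one-hidden-layer network in plain NumPy on synthetic "invariants" to regress a synthetic wall shear stress; the feature names and the target function are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic frame-invariant inputs at the first off-wall grid point
# (hypothetical stand-ins: velocity magnitude, wall distance, gradient norm).
N = 2000
speed = rng.uniform(1.0, 10.0, N)
ydist = rng.uniform(0.01, 0.1, N)
grad = rng.uniform(0.5, 5.0, N)
X = np.column_stack([speed, ydist, grad])

# Synthetic "true" wall shear stress, standing in for filtered-DNS targets.
tau = 0.02 * speed**1.8 / ydist**0.2 + 0.01 * grad

# Standardize inputs and target, then fit a tanh network by full-batch
# gradient descent on the mean-squared error.
Xs = (X - X.mean(0)) / X.std(0)
ts = (tau - tau.mean()) / tau.std()
W1 = 0.5 * rng.standard_normal((3, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal(16); b2 = 0.0
lr = 0.05
for _ in range(5000):
    H = np.tanh(Xs @ W1 + b1)          # hidden activations, shape (N, 16)
    err = H @ W2 + b2 - ts             # prediction error
    dH = np.outer(err, W2) * (1.0 - H**2)   # backprop through tanh
    W2 -= lr * (H.T @ err) / N; b2 -= lr * err.mean()
    W1 -= lr * (Xs.T @ dH) / N; b1 -= lr * dH.mean(0)

H = np.tanh(Xs @ W1 + b1)
rel_rmse = np.sqrt(np.mean((H @ W2 + b2 - ts)**2))   # 1.0 = predict-the-mean
```

Because the inputs are scalar invariants rather than raw velocity components, a model of this form is unchanged under rotation of the coordinate frame, which is the property the abstract highlights.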
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing, within the SPARC in-house finite volume flow solver, advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov-Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we briefly overview the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
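The POD step in POD/LSPG can be sketched independently of any flow solver: POD modes are the left singular vectors of a snapshot matrix, truncated by an energy criterion. Below is a minimal NumPy illustration on synthetic rank-3 snapshot data; the sizes and the 99.99% threshold are arbitrary choices for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic snapshot matrix: 200-DOF states at 30 "time steps", built from a
# rank-3 process plus small noise (stands in for saved flow-solver states).
modes_true = rng.standard_normal((200, 3))
coeffs = rng.standard_normal((3, 30))
S = modes_true @ coeffs + 1e-3 * rng.standard_normal((200, 30))

# POD = SVD of the snapshot matrix; keep enough modes to capture 99.99% of
# the snapshot "energy" (sum of squared singular values).
U, s, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.9999)) + 1
Phi = U[:, :k]                       # POD basis for a reduced-order model
```

On this synthetic data the energy criterion recovers the underlying rank (three modes), which is the mechanism by which POD trades a small, quantified energy loss for a much smaller basis.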
AIAA SciTech Forum - 55th AIAA Aerospace Sciences Meeting
In many aerospace applications, it is critical to be able to model fluid-structure interactions. In particular, correctly predicting the power spectral density of pressure fluctuations at surfaces can be important for assessing potential resonances and failure modes. Current turbulence modeling methods, such as wall-modeled Large Eddy Simulation and Detached Eddy Simulation, cannot reliably predict these pressure fluctuations for many applications of interest. The focus of this paper is on efforts to use data-driven machine learning methods to learn correction terms for the wall pressure fluctuation spectrum. In particular, the non-locality of the wall pressure fluctuations in a compressible boundary layer is investigated using random forests and neural networks trained and evaluated on Direct Numerical Simulation data.
47th AIAA Fluid Dynamics Conference, 2017
We investigate a novel application of deep neural networks to modeling of errors in prediction of surface pressure fluctuations beneath a compressible, turbulent flow. In this context, the truth solution is given by Direct Numerical Simulation (DNS) data, while the predictive model is a wall-modeled Large Eddy Simulation (LES). The neural network provides a means to map relevant statistical flow-features within the LES solution to errors in prediction of wall pressure spectra. We simulate a number of flat plate turbulent boundary layers using both DNS and wall-modeled LES to build up a database with which to train the neural network. We then apply machine learning techniques to develop an optimized neural network model for the error in terms of relevant flow features.
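As a simplified stand-in for the neural-network error model (a linear least-squares fit replaces the network here), the sketch below maps a few hypothetical statistical flow features to a synthetic spectral-error target; the feature names and the error relation are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical database: one row per flat-plate boundary-layer case, with a
# few statistical LES features (invented stand-ins for the paper's features).
ncase = 50
re_theta = rng.uniform(1e3, 5e3, ncase)   # momentum-thickness Reynolds number
mach = rng.uniform(0.3, 2.0, ncase)       # Mach number
shape_h = rng.uniform(1.3, 1.8, ncase)    # shape factor
F = np.column_stack([re_theta, mach, shape_h])

# Synthetic "LES-vs-DNS wall-pressure-spectrum error" for each case.
err_true = 1e-4 * re_theta + 0.5 * mach - 0.2 * shape_h

# Error model: linear least squares on standardized features plus intercept
# (a neural network generalizes exactly this feature -> error mapping).
Fs = (F - F.mean(0)) / F.std(0)
Adesign = np.column_stack([Fs, np.ones(ncase)])
coef, *_ = np.linalg.lstsq(Adesign, err_true, rcond=None)
pred = Adesign @ coef
r2 = 1.0 - np.sum((pred - err_true)**2) / np.sum((err_true - err_true.mean())**2)
```

The workflow mirrors the abstract's: assemble a case database from paired DNS/LES runs, extract statistical features from the LES side, and regress the spectrum-prediction error, with the neural network replacing the linear fit when the mapping is nonlinear.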