The Reynolds-averaged Navier–Stokes (RANS) equations remain a workhorse technology for simulating compressible fluid flows of practical interest. Due to model-form errors, however, RANS models can yield erroneous predictions that preclude their use on mission-critical problems. This work presents a data-driven turbulence modeling strategy aimed at improving RANS models for compressible fluid flows. The strategy has three core aspects: (1) predicting the discrepancy in the Reynolds stress tensor and turbulent heat flux via machine learning (ML), (2) estimating uncertainties in ML model outputs via out-of-distribution detection, and (3) applying multi-step training strategies to improve feature-response consistency. Results are presented across a range of cases publicly available on NASA’s turbulence modeling resource involving wall-bounded flows, jet flows, and hypersonic boundary layer flows with cold walls. We find that a single ML turbulence model provides consistent improvements for numerous quantities of interest across all cases.
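As a hedged illustration of aspect (2), flagging out-of-distribution query points in an ML model's feature space can be sketched with a simple Mahalanobis-distance criterion; the features, synthetic data, and quantile threshold below are placeholders, not the detector used in the work:

```python
import numpy as np

def fit_ood_detector(train_features):
    """Fit the mean and (regularized) inverse covariance of the training features."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-8 * np.eye(cov.shape[0]))
    return mu, cov_inv

def mahalanobis_distance(x, mu, cov_inv):
    """Distance of each row of x from the training distribution."""
    d = x - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', d, cov_inv, d))

# Stand-in training features: 1000 samples, 3 hypothetical flow features
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 3))
mu, cov_inv = fit_ood_detector(train)

# Flag query points far outside the training distribution
query = np.array([[0.1, -0.2, 0.3],   # in-distribution
                  [8.0,  8.0, 8.0]])  # far outside
dist = mahalanobis_distance(query, mu, cov_inv)
threshold = np.quantile(mahalanobis_distance(train, mu, cov_inv), 0.99)
ood_flags = dist > threshold
```

In a RANS setting, predictions at points flagged this way would revert to the baseline model or carry inflated uncertainty.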
The elemental equation governing heat transfer in aerodynamic flows is the internal energy equation. For a boundary layer flow, a double integration of the Reynolds-averaged form of this equation provides an expression of the wall heat flux in terms of the integrated effects, over the boundary layer, of various physical processes: turbulent dissipation, mean dissipation, turbulent heat flux, etc. Recently available direct numerical simulation data for a Mach 11 cold-wall turbulent boundary layer allows a comparison of the exact contributions of these terms in the energy equation to the wall heat flux with their counterparts modeled in the Reynolds-averaged Navier-Stokes (RANS) framework. Various approximations involved in RANS, both closure models as well as approximations involved in adapting incompressible RANS models to a compressible form, are assessed through examination of the internal energy balance. There are a number of potentially problematic assumptions and terms identified through this analysis. The effect of compressibility corrections of the dilatational dissipation type is explored, as is the role of the modeled turbulent dissipation, in the context of wall heat flux predictions. The results indicate several potential avenues for RANS model improvement for hypersonic cold-wall boundary-layer flows.
We develop methods that could be used to qualify a training dataset and a data-driven turbulence closure trained on it. By qualify, we mean identify the kind of turbulent physics that could be simulated by the data-driven closure. We limit ourselves to closures for the Reynolds-Averaged Navier–Stokes (RANS) equations. We build on our previous work on assembling feature-spaces and on clustering and characterizing the Direct Numerical Simulation datasets that are typically pooled to constitute training datasets. In this paper, we develop an alternative way to assemble feature-spaces and thus check the correctness and completeness of our previous method. We then use the characterization of our training dataset to determine whether a data-driven turbulence closure trained on it would generalize to an unseen flow configuration, an impinging jet in our case. Finally, we train a RANS closure architected as a neural network, develop an explanation, i.e., an interpretable approximation, using generalized linear mixed-effects models, and check whether the explanation resembles a contemporary closure from turbulence modeling.
Machine-learned models, specifically neural networks, are increasingly used as “closures” or “constitutive models” in engineering simulators to represent fine-scale physical phenomena that are too computationally expensive to resolve explicitly. However, these neural net models of unresolved physical phenomena tend to fail unpredictably and are therefore not used in mission-critical simulations. In this report, we describe new methods to authenticate them, i.e., to determine the (physical) information content of their training datasets, to qualify the scenarios where they may be used, and to verify that the neural net, as trained, adheres to physics theory. We demonstrate these methods with a neural net closure of turbulent phenomena used in the Reynolds-Averaged Navier–Stokes equations. We show the types of turbulent physics extant in our training datasets and, using a test flow of an impinging jet, identify the exact locations where the neural network would be extrapolating, i.e., where it would be used outside the feature-space where it was trained. Using Generalized Linear Mixed Models, we also generate explanations of the neural net (à la Local Interpretable Model-agnostic Explanations) at prototypes placed in the training data and compare them with approximate analytical models from turbulence theory. Finally, we verify our findings by reproducing them using two different methods.
This paper explores unsupervised learning approaches for analysis and categorization of turbulent flow data. Single point statistics from several high-fidelity turbulent flow simulation data sets are classified using a Gaussian mixture model clustering algorithm. Candidate features are proposed, which include barycentric coordinates of the Reynolds stress anisotropy tensor, as well as scalar and angular invariants of the Reynolds stress and mean strain rate tensors. A feature selection algorithm is applied to the data in a sequential fashion, flow by flow, to identify a good feature set and an optimal number of clusters for each data set. The algorithm is first applied to Direct Numerical Simulation data for plane channel flow, and produces clusters that are consistent with turbulent flow theory and empirical results that divide the channel flow into a number of regions (viscous sub-layer, log layer, etc.). Clusters are then identified for flow over a wavy-walled channel, flow over a bump in a channel, and flow past a square cylinder. Some clusters are closely identified with the anisotropy state of the turbulence, as indicated by the location within the barycentric map of the Reynolds stress tensor. Other clusters can be connected to physical phenomena, such as boundary layer separation and free shear layers. Exemplar points from the clusters, or prototypes, are then identified using a prototype selection method. These exemplars compress the dataset by a factor of 10 to 1000. The clustering and prototype selection algorithms provide a foundation for physics-based, semi-automated classification of turbulent flow states and extraction of a subset of data points that can serve as the basis for the development of explainable machine-learned turbulence models.
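The clustering step described above can be sketched, under simplifying assumptions, with scikit-learn's GaussianMixture. Here two well-separated synthetic groups stand in for the turbulence features (barycentric coordinates, tensor invariants), and the number of clusters is chosen by minimizing the Bayesian information criterion; the paper's actual sequential feature-selection procedure is more elaborate:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-in for single-point turbulence features: two separated groups
features = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.1, size=(200, 2)),
    rng.normal(loc=[1.0, 1.0], scale=0.1, size=(200, 2)),
])

# Fit GMMs with increasing component counts and keep the one minimizing BIC
models = [GaussianMixture(n_components=k, random_state=0).fit(features)
          for k in range(1, 5)]
bics = [m.bic(features) for m in models]
best = models[int(np.argmin(bics))]
labels = best.predict(features)
```

Prototype selection could then, for example, retain the points closest to each component mean as cluster exemplars.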
The development of a next generation high-fidelity modeling code for wind plant applications is one of the central focus areas of the U.S. Department of Energy Atmosphere to Electrons (A2e) initiative. The code is based on a highly scalable framework, currently called Nalu-Wind. One key aspect of the model development is a coordinated formal validation program undertaken specifically to establish the predictive capability of Nalu-Wind for wind plant applications. The purpose of this document is to define the verification and validation (V&V) plan for the A2e high-fidelity modeling capability. It summarizes the V&V framework, identifies code capability users and use cases, describes model validation needs, and presents a timeline to meet those needs.
An experimental characterization of the flow environment for the Sandia Axisymmetric Transonic Hump is presented. This is an axisymmetric model with a circular hump tested at a transonic Mach number, similar to the classic Bachalo-Johnson configuration. The flow is turbulent approaching the hump and becomes locally supersonic at the apex. This leads to a shock-wave/boundary-layer interaction, an unsteady separation bubble, and flow reattachment downstream. The characterization focuses on the quantities required to set proper boundary conditions for computational efforts described in the companion paper, including: 1) stagnation and test section pressure and temperature; 2) turbulence intensity; and 3) tunnel wall boundary layer profiles. Model characterization upstream of the hump includes: 1) surface shear stress; and 2) boundary layer profiles. Note: Numerical values characterizing the experiment have been redacted from this version of the paper. Model geometry and boundary conditions will be withheld until the official start of the Validation Challenge, at which time a revised version of this paper will become available. Data surrounding the hump are considered final results and will be withheld until completion of the Validation Challenge.
Near-wall turbulence models in Large-Eddy Simulation (LES) typically approximate near-wall behavior using a solution to the mean flow equations. This approach inevitably leads to errors when the modeled flow does not satisfy the assumptions surrounding the use of a mean flow approximation for an unsteady boundary condition. Herein, modern machine learning (ML) techniques are utilized to implement a coordinate frame invariant model of the wall shear stress that is derived specifically for complex flows for which mean near-wall models are known to fail. The model operates on a set of scalar and vector invariants based on data taken from the first LES grid point off the wall. Neural networks were trained and validated on spatially filtered direct numerical simulation (DNS) data. The trained networks were then tested on data to which they were never previously exposed, and the accuracy of their wall shear stress predictions was compared to both a standard mean wall model approach and the true stress values taken from the DNS data. The ML approach considerably improved the accuracy of individual shear stress predictions and produced a more accurate distribution of wall shear stress values than the standard mean wall model. This result held both in regions where the standard mean approach typically performs satisfactorily and in regions where it is known to fail, and it held whether the networks were trained and tested on data from the same flow type/region or on data from different flow topologies.
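A minimal stand-in for this kind of invariant-input wall model is sketched below: a small multilayer perceptron maps frame-invariant scalars built from the velocity at the first off-wall grid point to a wall shear stress magnitude. The data and the algebraic "true" stress are synthetic placeholders, not the filtered DNS quantities used in the work:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Stand-in inputs: velocity at the first off-wall grid point and wall distance
u = rng.normal(size=(2000, 3))
y_dist = rng.uniform(0.01, 0.1, size=(2000, 1))

# Frame-invariant inputs: scalar magnitudes rather than raw vector components
speed = np.linalg.norm(u, axis=1, keepdims=True)              # |u|
u_parallel = np.linalg.norm(u[:, :2], axis=1, keepdims=True)  # wall-parallel magnitude
X = np.hstack([speed, u_parallel, y_dist])

# Synthetic stand-in for the "true" wall shear stress magnitude
tau_w = (u_parallel * (1.0 + y_dist)).ravel()

# Train on the first 1500 samples, hold out the rest for testing
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X[:1500], tau_w[:1500])
pred = net.predict(X[1500:])
```

Because the inputs are invariants, any rotation of the wall-parallel velocity components leaves the model's prediction unchanged, which is the coordinate-frame invariance the abstract refers to.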
An implicit, low-dissipation, low-Mach, variable density control volume finite element formulation is used to explore foundational understanding of numerical accuracy for large-eddy simulation applications on hybrid meshes. Detailed simulation comparisons are made between low-order hexahedral, tetrahedral, pyramid, and wedge/prism topologies against a third-order, unstructured hexahedral topology. Using smooth analytical and manufactured low-Mach solutions, design-order convergence is established for the hexahedral, tetrahedral, pyramid, and wedge element topologies using a new open boundary condition based on energy-stable methodologies previously deployed within a finite-difference context. A wide range of simulations demonstrate that low-order hexahedral- and wedge-based element topologies behave nearly identically in both computed numerical errors and overall simulation timings. Moreover, low-order tetrahedral and pyramid element topologies also display nearly the same numerical characteristics. Although the superiority of the hexahedral-based topology is clearly demonstrated for trivial laminar, principally-aligned flows, e.g., a 1x2x10 channel flow with specified pressure drop, this advantage is reduced for non-aligned, turbulent flows including the Taylor–Green Vortex, turbulent plane channel flow (Reτ = 395), and buoyant flow past a heated cylinder. With the order of accuracy demonstrated for both homogeneous and hybrid meshes, it is shown that solution verification for the selected complex flows can be established for all topology types. Although the number of elements in a mesh of like spacing comprised of tetrahedral, wedge, or pyramid elements increases as compared to the hexahedral counterpart, for wall-resolved large-eddy simulation, the increased assembly and residual evaluation computational time for non-hexahedral topologies is offset by more efficient linear solver times.
Lastly, most simulation results indicate that modest polynomial promotion provides a significant increase in solution accuracy.
Deep-water offshore sites are an untapped opportunity to bring large-scale offshore wind energy to coastal population centers. The primary challenge has been the projected high costs for floating offshore wind systems. This work presents a comprehensive investigation of a new opportunity for deep-water offshore wind using large-scale vertical axis wind turbines (VAWTs). Owing to inherent features of this technology, there is a potential transformational opportunity to address the major cost drivers for floating wind using vertical axis wind turbines. The focus of this report is to evaluate the technical potential for this new technology. The approach to evaluating this potential was to perform system design studies focused on improving the understanding of technical performance parameters while looking for cost reduction opportunities. VAWT design codes were developed in order to perform these design studies. To gain a better understanding of the design space for floating VAWT systems, a comprehensive design study of multiple rotor configuration options was carried out. Floating platforms and moorings were then sized and evaluated for each of the candidate rotor configurations. Preliminary levelized cost of energy (LCOE) estimates and LCOE ranges were produced based on the design study results for each of the major turbine and system components. The major outcomes of this study are a comprehensive technology assessment of VAWT performance and preliminary LCOE estimates that demonstrate that floating VAWTs may have favorable performance and costs in comparison to conventional horizontal axis wind turbines (HAWTs) in the deep-water offshore environment where floating systems are required, indicating that this new technology warrants further study.
Wind applications require the ability to simulate rotating blades. To support this use-case, a novel design-order sliding mesh algorithm has been developed and deployed. The hybrid method combines the control volume finite element methodology (CVFEM) with concepts found within a discontinuous Galerkin (DG) finite element method (FEM) to manage a sliding mesh. The method has been demonstrated to be design-order for the tested polynomial basis (P=1 and P=2) and has been deployed to provide production simulation capability for a Vestas V27 (225 kW) wind turbine. Other stationary and canonical rotating flow simulations are also presented. As the majority of wind-energy applications are driving extensive usage of hybrid meshes, a foundational study that outlines near-wall numerical behavior for a variety of element topologies is presented. Results indicate that the proposed nonlinear stabilization operator (NSO) is an effective stabilization methodology to control Gibbs phenomena at large cell Peclet numbers. The study also provides practical mesh resolution guidelines for future analysis efforts. Application-driven performance and algorithmic improvements have been carried out to increase robustness of the scheme on hybrid production wind energy meshes. Specifically, the Kokkos-based Nalu Kernel construct outlined in the FY17/Q4 ExaWind milestone has been transitioned to the hybrid mesh regime. This code base is exercised within a full V27 production run. Simulation timings for parallel search and custom ghosting are presented. As the low-Mach application space requires implicit matrix solves, the cost of matrix reinitialization has been evaluated on a variety of production meshes. Results indicate that at low element counts, i.e., fewer than 100 million elements, matrix graph initialization and preconditioner setup times are small.
However, as mesh sizes increase, e.g., 500 million elements, simulation time associated with "setup" costs can increase to nearly 50% of overall simulation time when using the full Tpetra solver stack and nearly 35% when using a mixed Tpetra-Hypre-based solver stack. The report also highlights the project achievement of surpassing the 1 billion element mesh scale for a production V27 hybrid mesh. A detailed timing breakdown is presented that again suggests work to be done in the setup events associated with the linear system. In order to mitigate these initialization costs, several application paths have been explored, all of which are designed to reduce the frequency of matrix reinitialization. Methods such as removing Jacobian entries on the dynamic matrix columns (in concert with increased inner equation iterations), and lagging of Jacobian entries have reduced setup times at the cost of numerical stability. Artificially increasing, or bloating, the matrix stencil to ensure that full Jacobians are included is developed with results suggesting that this methodology is useful in decreasing reinitialization events without loss of matrix contributions. With the above foundational advances in computational capability, the project is well positioned to begin scientific inquiry on a variety of wind-farm physics such as turbine/turbine wake interactions.
This report summarizes fiscal year (FY) 2017 progress towards developing and implementing, within the SPARC in-house finite volume flow solver, advanced fluid reduced order models (ROMs) for compressible captive-carriage flow problems of interest to Sandia National Laboratories for the design and qualification of nuclear weapons components. The proposed projection-based model order reduction (MOR) approach, known as the Proper Orthogonal Decomposition (POD)/Least-Squares Petrov–Galerkin (LSPG) method, can substantially reduce the CPU-time requirement for these simulations, thereby enabling advanced analyses such as uncertainty quantification and design optimization. Following a description of the project objectives and FY17 targets, we overview briefly the POD/LSPG approach to model reduction implemented within SPARC. We then study the viability of these ROMs for long-time predictive simulations in the context of a two-dimensional viscous laminar cavity problem, and describe some FY17 enhancements to the proposed model reduction methodology that led to ROMs with improved predictive capabilities. Also described in this report are some FY17 efforts pursued in parallel to the primary objective of determining whether the ROMs in SPARC are viable for the targeted application. These include the implementation and verification of some higher-order finite volume discretization methods within SPARC (towards using the code to study the viability of ROMs on three-dimensional cavity problems) and a novel structure-preserving constrained POD/LSPG formulation that can improve the accuracy of projection-based reduced order models. We conclude the report by summarizing the key takeaways from our FY17 findings, and providing some perspectives for future work.
This report documents work performed using ALCC computing resources granted under a proposal submitted in February 2016, with the resource allocation period spanning the period July 2016 through June 2017. The award allocation was 10.7 million processor-hours at the National Energy Research Scientific Computing Center. The simulations performed were in support of two projects: the Atmosphere to Electrons (A2e) project, supported by the DOE EERE office; and the Exascale Computing Project (ECP), supported by the DOE Office of Science. The project team for both efforts consists of staff scientists and postdocs from Sandia National Laboratories and the National Renewable Energy Laboratory. At the heart of these projects is the open-source computational-fluid-dynamics (CFD) code, Nalu. Nalu solves the low-Mach-number Navier-Stokes equations using an unstructured-grid discretization. Nalu leverages the open-source Trilinos solver library and the Sierra Toolkit (STK) for parallelization and I/O. This report documents baseline computational performance of the Nalu code on problems of direct relevance to the wind plant physics application, namely, Large Eddy Simulation (LES) of an atmospheric boundary layer (ABL) flow and wall-modeled LES of a flow past a static wind turbine rotor blade. Parallel performance of Nalu and its constituent solver routines residing in the Trilinos library has been assessed previously under various campaigns. However, both Nalu and Trilinos have been, and remain, in active development and resources have not been available previously to rigorously track code performance over time. With the initiation of the ECP, it is important to establish and document baseline code performance on the problems of interest. This will allow the project team to identify and target any deficiencies in performance, as well as highlight any performance bottlenecks as we exercise the code on a greater variety of platforms and at larger scales.
The current study is rather modest in scale, examining performance on problem sizes of O(100 million) elements and core counts up to 8k. This will be expanded as more computational resources become available to the projects.
When faced with a restrictive evaluation budget that is typical of today's high-fidelity simulation models, the effective exploitation of lower-fidelity alternatives within the uncertainty quantification (UQ) process becomes critically important. Herein, we explore the use of multifidelity modeling within UQ, for which we rigorously combine information from multiple simulation-based models within a hierarchy of fidelity, in seeking accurate high-fidelity statistics at lower computational cost. Motivated by correction functions that enable the provable convergence of a multifidelity optimization approach to an optimal high-fidelity point solution, we extend these ideas to discrepancy modeling within a stochastic domain and seek convergence of a multifidelity uncertainty quantification process to globally integrated high-fidelity statistics. For constructing stochastic models of both the low-fidelity model and the model discrepancy, we employ stochastic expansion methods (non-intrusive polynomial chaos and stochastic collocation) computed by integration/interpolation on structured sparse grids or regularized regression on unstructured grids. We seek to employ a coarsely resolved grid for the discrepancy in combination with a more finely resolved grid for the low-fidelity model. The resolutions of these grids may be defined statically or determined through uniform and adaptive refinement processes.
Adaptive refinement is particularly attractive, as it has the ability to preferentially target stochastic regions where the model discrepancy becomes more complex, i.e., where the predictive capabilities of the low-fidelity model start to break down and greater reliance on the high-fidelity model (via the discrepancy) is necessary. These adaptive refinement processes can either be performed separately for the different grids or within a coordinated multifidelity algorithm. In particular, we present an adaptive greedy multifidelity approach in which we extend the generalized sparse grid concept to consider candidate index set refinements drawn from multiple sparse grids, as governed by induced changes in the statistical quantities of interest and normalized by relative computational cost. Through a series of numerical experiments using statically defined sparse grids, adaptive multifidelity sparse grids, and multifidelity compressed sensing, we demonstrate that the multifidelity UQ process converges more rapidly than a single-fidelity UQ in cases where the variance of the discrepancy is reduced relative to the variance of the high-fidelity model (resulting in reductions in initial stochastic error), where the spectrum of the expansion coefficients of the model discrepancy decays more rapidly than that of the high-fidelity model (resulting in accelerated convergence rates), and/or where the discrepancy is more sparse than the high-fidelity model (requiring the recovery of fewer significant terms).
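The additive-discrepancy idea underlying this multifidelity approach can be illustrated on a toy problem: a cheap low-fidelity model is evaluated everywhere, while a discrepancy surrogate is fit on only a few expensive high-fidelity evaluations. Here simple polynomials stand in for the stochastic expansions and sparse grids of the actual method:

```python
import numpy as np

def f_low(x):
    # Cheap low-fidelity model
    return np.sin(x)

def f_high(x):
    # Expensive high-fidelity model = low-fidelity + a smooth discrepancy
    return np.sin(x) + 0.1 * x**2

# Many cheap low-fidelity points; only a handful of high-fidelity evaluations
x_fine = np.linspace(-1.0, 1.0, 201)
x_coarse = np.linspace(-1.0, 1.0, 5)

# Fit a low-order surrogate to the additive discrepancy on the coarse grid
disc = f_high(x_coarse) - f_low(x_coarse)
coef = np.polyfit(x_coarse, disc, deg=2)

def f_mf(x):
    """Multifidelity prediction: low-fidelity model plus discrepancy surrogate."""
    return f_low(x) + np.polyval(coef, x)

# Compare estimated means over the fine grid
mean_mf = np.mean(f_mf(x_fine))
mean_hf = np.mean(f_high(x_fine))
mean_lf = np.mean(f_low(x_fine))
```

Because the discrepancy is smoother (and here exactly quadratic), the coarse grid suffices to recover high-fidelity statistics that the low-fidelity model alone would miss; this is the variance/sparsity argument made in the abstract.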
Fluctuating boundary layer wall shear stress can be an important loading component for structures subjected to turbulent boundary layer flows. While normal force loading via wall pressure fluctuation is relatively well described analytically, there is a dearth of information for wall shear behavior. Starting with an approximate acoustic analogy, we derive simple approximate expressions for the behavior of both wall pressure and wall shear fluctuations utilizing a Taylor-hypothesis-based analogy between streamwise and temporal fluctuations. Analytical results include longitudinal spatial correlation, autocorrelation, frequency spectrum, RMS intensity, and longitudinal and lateral coherence expressions. While coefficients in these expressions usually require some empirical input, they nonetheless provide useful predictions for functional behavior. Comparison of the models with available literature data sets suggests reasonable agreement. Dedicated high fidelity numerical computations (direct numerical simulations) for a supersonic boundary layer are used to further explore the efficacy of these models. The analytical models for wall pressure fluctuation and wall shear fluctuation spectral density compare well for low frequency with the simulations when Reynolds number effects are included in the pressure fluctuation intensity. The approximate analytical models developed here provide a physics-based connection between classical empirical expressions and more complete experimental and computational descriptions.
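The Taylor-hypothesis link between streamwise and temporal fluctuations invoked above is, in its standard generic form (with $U_c$ the convection velocity; this is the textbook relation, not the paper's specific model):

```latex
% Frozen-turbulence (Taylor) hypothesis: convecting structures map
% streamwise separation to time delay via the convection velocity U_c
p'(x,\, t + \tau) \approx p'(x - U_c \tau,\, t),
% so the frequency spectrum follows from the streamwise wavenumber
% spectrum evaluated at k_1 = \omega / U_c:
\Phi_{pp}(\omega) = \frac{1}{U_c}\, \phi_{pp}\!\left(k_1\right)\Big|_{k_1 = \omega / U_c}.
```

Analogous relations convert measured or simulated frequency spectra of wall shear stress into spatial correlations, which is what allows the single-point empirical inputs mentioned above to yield coherence predictions.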
In many aerospace applications, it is critical to be able to model fluid-structure interactions. In particular, correctly predicting the power spectral density of pressure fluctuations at surfaces can be important for assessing potential resonances and failure modes. Current turbulence modeling methods, such as wall-modeled Large Eddy Simulation and Detached Eddy Simulation, cannot reliably predict these pressure fluctuations for many applications of interest. The focus of this paper is on efforts to use data-driven machine learning methods to learn correction terms for the wall pressure fluctuation spectrum. In particular, the non-locality of the wall pressure fluctuations in a compressible boundary layer is investigated using random forests and neural networks trained and evaluated on Direct Numerical Simulation data.
We investigate a novel application of deep neural networks to modeling of errors in prediction of surface pressure fluctuations beneath a compressible, turbulent flow. In this context, the truth solution is given by Direct Numerical Simulation (DNS) data, while the predictive model is a wall-modeled Large Eddy Simulation (LES). The neural network provides a means to map relevant statistical flow-features within the LES solution to errors in prediction of wall pressure spectra. We simulate a number of flat plate turbulent boundary layers using both DNS and wall-modeled LES to build up a database with which to train the neural network. We then apply machine learning techniques to develop an optimized neural network model for the error in terms of relevant flow features.
Least-squares Petrov–Galerkin (LSPG) model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform optimal projection associated with residual minimization at the time-continuous level, while LSPG techniques do so at the time-discrete level. Here, this work provides a detailed theoretical and computational comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the LSPG ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and computationally that decreasing the time step does not necessarily decrease the error for the LSPG ROM; instead, the time step should be ‘matched’ to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the LSPG reduced-order model by an order of magnitude.
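The Galerkin/LSPG distinction can be sketched for an implicit-Euler step of a linear model problem (an illustrative toy, not the GNAT implementation): Galerkin enforces orthogonality of the residual to the reduced basis, while LSPG minimizes the fully discrete residual in the least-squares sense, so by construction its residual can never exceed Galerkin's:

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 50, 5                                      # full and reduced dimensions
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n))    # linear dynamics dx/dt = A x
V, _ = np.linalg.qr(rng.normal(size=(n, k)))      # orthonormal reduced basis
x0 = rng.normal(size=n)                           # previous full-order state
dt = 0.01

# Implicit-Euler residual: r(y) = y - x0 - dt*A@y, i.e. (I - dt*A) y = x0
M = np.eye(n) - dt * A

# Galerkin ROM: V^T r(V xhat) = 0  =>  (V^T M V) xhat = V^T x0
xhat_g = np.linalg.solve(V.T @ M @ V, V.T @ x0)

# LSPG ROM: xhat = argmin || M V xhat - x0 ||_2
xhat_l, *_ = np.linalg.lstsq(M @ V, x0, rcond=None)

# Full-order reference step and state-space errors of each ROM
x_full = np.linalg.solve(M, x0)
err_g = np.linalg.norm(V @ xhat_g - x_full)
err_l = np.linalg.norm(V @ xhat_l - x_full)
res_g = np.linalg.norm(M @ V @ xhat_g - x0)
res_l = np.linalg.norm(M @ V @ xhat_l - x0)
```

Note that a smaller discrete residual does not automatically mean a smaller state error, which is consistent with the abstract's observation that LSPG accuracy depends nontrivially on the time step.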
Recent field experiments conducted in the near wake (up to 0.5 rotor diameters downwind of the rotor) of a Clipper Liberty C96 2.5 MW wind turbine using snow-based super-large-scale particle image velocimetry (SLPIV) (Hong et al., Nat. Commun., vol. 5, 2014, 4216) were successful in visualizing tip vortex cores as areas devoid of snowflakes. The so-visualized snow voids, however, suggested tip vortex cores of complex shape, consisting of circular cores with distinct elongated comet-like tails. We employ large-eddy simulation (LES) to elucidate the structure and dynamics of the complex tip vortices identified experimentally. We show that the LES, with inflow conditions representing as closely as possible the state of the flow approaching the turbine when the SLPIV experiments were carried out, reproduces vortex cores in good qualitative agreement with the SLPIV results, essentially capturing all vortex core patterns observed in the field in the tip shear layer. The computed results show that the visualized vortex patterns are formed by the tip vortices and a second set of counter-rotating spiral vortices intertwined with the tip vortices. To probe the dependence of these newly uncovered coherent flow structures on turbine design, size and approach flow conditions, we carry out LES for three additional turbines: (i) the Scaled Wind Farm Technology (SWiFT) turbine developed by Sandia National Laboratories in Lubbock, TX, USA; (ii) the wind turbine developed for the European collaborative MEXICO (Model Experiments in Controlled Conditions) project; and (iii) the model turbine presented by Lignarolo et al. (J. Fluid Mech., vol. 781, 2015, pp. 467-493). We also simulate the Clipper turbine under varying inflow turbulence conditions. We show that counter-rotating vortex structures similar to those observed for the Clipper turbine are also observed for the SWiFT, MEXICO and model wind turbines.
However, the counter-rotating vortices from the model turbine are significantly weaker relative to its tip vortices. We also show that incoming flows with low-level turbulence attenuate the elongation of the tip and counter-rotating vortices. Sufficiently high turbulence levels in the incoming flow, on the other hand, tend to break up the coherence of the spiral vortices in the near wake. To elucidate the physical mechanism that gives rise to such rich coherent dynamics, we examine the stability of the turbine tip shear layer using the theory proposed by Leibovich & Stewartson (J. Fluid Mech., vol. 126, 1983, pp. 335-356). We show that for all simulated cases the theory consistently indicates the flow to be unstable exactly in the region where the counter-rotating spirals emerge. We thus postulate that centrifugal instability of the rotating turbine tip shear layer is a possible mechanism for explaining the phenomena we have uncovered herein.
On Thursday, August 25, 2016, the ATDM L2 milestone review panel met with the milestone team to conduct a final assessment of the completeness and quality of the work performed. First and foremost, the panel would like to congratulate and commend the milestone team for a job well done. The team completed a significant body of high-quality work toward very ambitious goals. Additionally, their persistence in working through the technical challenges associated with evolving technology, the nontechnical challenges associated with integrating across multiple software development teams, and the many demands on their time speaks volumes about their commitment to delivering the best work possible to advance the ATDM program. The panel’s comments on the individual completion criteria appear in the last section of this memo.
This report summarizes FY16 progress towards enabling uncertainty quantification for compressible cavity simulations using model order reduction (MOR). The targeted application is the quantification of the captive-carry environment for the design and qualification of nuclear weapons systems. To accurately simulate this scenario, Large Eddy Simulations (LES) require very fine meshes and long run times, which lead to week-long runs even on parallel state-of-the-art supercomputers. MOR can substantially reduce the CPU-time requirement for these simulations. We describe two approaches for model order reduction for nonlinear systems, which can yield significant speed-ups when combined with hyper-reduction: the Proper Orthogonal Decomposition (POD)/Galerkin approach and the POD/Least-Squares Petrov–Galerkin (LSPG) approach. The implementation of these methods within the in-house compressible flow solver SPARC is discussed. Next, a method for stabilizing and enhancing low-dimensional reduced bases that was developed as a part of this project is detailed. This approach is based on a premise termed "minimal subspace rotation", and has the advantage of yielding ROMs that are more stable and accurate for long-time compressible cavity simulations. Numerical results for some laminar cavity problems aimed at gauging the viability of the proposed model reduction methodologies are presented and discussed.
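Both ROM approaches named above start from a POD basis extracted from simulation snapshots. The following is a generic sketch of that step with synthetic data (the SPARC workflow and cavity snapshots are not reproduced): the basis is given by the leading left singular vectors of the mean-subtracted snapshot matrix, truncated by retained "energy".

```python
import numpy as np

# Minimal sketch of building a POD basis from simulation snapshots
# (synthetic data standing in for solver output).
rng = np.random.default_rng(2)

n_dof, n_snap = 400, 30
# Synthetic snapshot matrix: a few coherent modes plus small noise.
modes = rng.normal(size=(n_dof, 3))
coeffs = rng.normal(size=(3, n_snap))
S = modes @ coeffs + 1e-3 * rng.normal(size=(n_dof, n_snap))

# POD: left singular vectors of the mean-subtracted snapshot matrix.
S = S - S.mean(axis=1, keepdims=True)
U, sigma, _ = np.linalg.svd(S, full_matrices=False)

# Truncate by retained "energy" (cumulative squared singular values).
energy = np.cumsum(sigma**2) / np.sum(sigma**2)
k = int(np.searchsorted(energy, 0.999) + 1)
Phi = U[:, :k]                       # reduced basis, n_dof x k

print(k)
```

The governing equations are then projected onto span(Phi), via Galerkin or LSPG projection, to obtain the ROM; hyper-reduction additionally approximates the nonlinear terms so the ROM cost is independent of n_dof.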
Simulations of the flow past a rectangular cavity containing a model captive store are performed using a hybrid Reynolds-averaged Navier–Stokes/large-eddy simulation model. Calculated pressure fluctuation spectra are validated using measurements made on the same configuration in a trisonic wind tunnel at Mach numbers of 0.60, 0.80, and 1.47. The simulation results are used to calculate unsteady integrated forces and moments acting on the store. Spectra of the forces and moments, along with correlations calculated for force/moment pairs, reveal that a complex relationship exists between the unsteady integrated forces and the measured resonant cavity modes, as indicated in the cavity wall pressure measurements. The structure of identified cavity resonant tones is examined by visualization of filtered surface pressure fields.
Atmosphere to electrons (A2e) is a multi-year U.S. Department of Energy (DOE) research initiative targeting significant reductions in the cost of wind energy through an improved understanding of the complex physics governing wind flow into and through whole wind farms. Better insight into the flow physics of large multi-turbine arrays will address the plant-level energy losses, is likely to reduce annual operational costs by hundreds of millions of dollars, and will improve project financing terms to more closely resemble traditional capital projects. In support of this initiative, two planning meetings were convened, bringing together professionals from universities, national laboratories, and industry to discuss wind plant modeling challenges, requirements, best practices, and priorities. This report documents the combined work of the two meetings and serves as a key part of the foundation for the A2e/HFM effort for predictive modeling of whole wind plant physics.
Discrete-optimal model-reduction techniques such as the Gauss–Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge–Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
This work examines simulation requirements for ensuring accurate predictions of compressible cavity flows. Lessons learned from this study will be used in the future to study the effects of complex geometric features, representative of those found on real weapons bays, on compressible flow past open cavities. A hybrid RANS/LES simulation method is applied to a rectangular cavity with length-to-depth ratio of 7, in order to first validate the model for this class of flows. Detailed studies of mesh resolution, absorbing boundary condition formulation, and boundary zone extent are included and guidelines are developed for ensuring accurate prediction of cavity pressure fluctuations.
An approach for building energy-stable Galerkin reduced order models (ROMs) for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. This method is an extension of earlier work by the authors specific to the equations of linearized compressible inviscid flow. The key idea is to apply to the PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. For linear problems, the desired transformation is induced by a special inner product, termed the "symmetry inner product", which is derived herein for several systems of physical interest. Connections are established between the proposed approach and other stability-preserving model reduction methods, giving the paper a review flavor. More specifically, it is shown that a discrete counterpart of this inner product is a weighted L2 inner product obtained by solving a Lyapunov equation, first proposed by Rowley et al. and termed herein the "Lyapunov inner product". Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
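The "Lyapunov inner product" mentioned above can be demonstrated concretely for a small, generic stable LTI system. This sketch uses synthetic matrices, not the paper's test cases: solving the Lyapunov equation A^T P + P A = -Q for SPD Q yields an SPD weight P, and in the weighted inner product <u, v>_P = u^T P v the system energy x^T P x decays along trajectories, which is what makes P-weighted Galerkin projection stability-preserving.

```python
import numpy as np

# Sketch of the Lyapunov inner product for a stable LTI system dx/dt = A x:
# solve A^T P + P A = -Q (Q SPD), then weight the inner product by P.
# Solved densely here via Kronecker products on a synthetic system.
rng = np.random.default_rng(3)
n = 6

M = rng.normal(size=(n, n))
N = rng.normal(size=(n, n))
A = -(M @ M.T) - np.eye(n) + 0.5 * (N - N.T)   # stable: symmetric part < 0

Q = np.eye(n)
# vec(A^T P + P A) = (kron(I, A^T) + kron(A^T, I)) vec(P)
K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
P = np.linalg.solve(K, -Q.reshape(-1)).reshape(n, n)
P = 0.5 * (P + P.T)                            # symmetrize round-off

spd = bool(np.all(np.linalg.eigvalsh(P) > 0))
residual = float(np.linalg.norm(A.T @ P + P @ A + Q))
print(spd, residual)
```

For large systems one would use a dedicated Lyapunov solver (e.g. Bartels–Stewart) rather than the dense Kronecker formulation, which scales as n^6.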
This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the "symmetry inner product". Attention is then turned to nonlinear conservation laws.
A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier-Stokes equations is derived, and it is demonstrated that if a Galerkin ROM is constructed in this inner product, the ROM system energy will be bounded in a way that is consistent with the behavior of the exact solution to these PDEs, i.e., the ROM will be energy-stable. The viability of the linear as well as nonlinear continuous projection model reduction approaches developed as a part of this project is evaluated on several test cases, including the cavity configuration of interest in the targeted application area. In the second part of this report, some POD/Galerkin approaches for building stable ROMs using discrete projection are explored. It is shown that, for generic linear time-invariant (LTI) systems, a discrete counterpart of the continuous symmetry inner product is a weighted L2 inner product obtained by solving a Lyapunov equation. This inner product was first proposed by Rowley et al., and is termed herein the "Lyapunov inner product". Comparisons between the symmetry inner product and the Lyapunov inner product are made, and the performance of ROMs constructed using these inner products is evaluated on several benchmark test cases. Also in the second part of this report, a new ROM stabilization approach, termed "ROM stabilization via optimization-based eigenvalue reassignment", is developed for generic LTI systems. At the heart of this method is a constrained nonlinear least-squares optimization problem that is formulated and solved numerically to ensure accuracy of the stabilized ROM. Numerical studies reveal that the optimization problem is computationally inexpensive to solve, and that the new stabilization approach delivers ROMs that are stable as well as accurate. Summaries of "lessons learned" and perspectives for future work motivated by this LDRD project are provided at the end of each of the two main chapters.
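The eigenvalue-reassignment idea can be illustrated in a heavily simplified form. The report formulates a constrained nonlinear least-squares optimization that places the unstable eigenvalues while preserving ROM accuracy; the sketch below, on a synthetic operator, shows only the reassignment step itself (clipping unstable eigenvalues into the left half-plane) and omits the accuracy-preserving optimization entirely.

```python
import numpy as np

# Simplified illustration of eigenvalue reassignment for an LTI ROM
# operator (synthetic; the report's constrained least-squares optimization
# over the reassigned spectrum is not reproduced here).
rng = np.random.default_rng(4)
k = 8

A = rng.normal(size=(k, k))          # hypothetical, possibly unstable ROM operator
lam, V = np.linalg.eig(A)

# Move any eigenvalue with nonnegative real part just left of the axis,
# keeping imaginary parts; conjugate pairs stay conjugate, so the
# reconstructed operator is real up to round-off.
lam_new = np.where(lam.real >= 0.0, -0.01 + 1j * lam.imag, lam)
A_stab = np.real(V @ np.diag(lam_new) @ np.linalg.inv(V))

print(np.max(np.linalg.eigvals(A_stab).real))
```

In the report's method, the stable eigenvalue locations are decision variables chosen by the optimizer to minimize the mismatch between ROM and full-order outputs, rather than fixed by a clipping rule as here.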
An extensive database of simulated loads representing almost 100 years of operation of a utility-scale wind turbine has been developed using high-performance computing resources. Such a large amount of data makes it possible to evaluate several proposals being considered in planned revisions of industry guidelines such as the International Electrotechnical Commission's 61400-1 wind turbine design standard. Current design provisions, especially those dependent on large amounts of data, can be critically examined and validated or alternative proposals can be made based on studies using this loads database. We discuss one design load case in particular that requires nominal 50-year loads, often difficult to establish with limited simulations followed by statistical extrapolation, to which a load factor (1.25) is applied. Alternatives that use other load statistics easier to establish from simulations are systematically evaluated. Such robust load statistics are associated with lower levels of uncertainty. Load factors to be applied to such alternative nominal loads are higher than those for the 50-year load. We discuss how the loads database developed enabled systematic study of a proposal that can serve as an alternative to use of a factored 50-year load. Calibration of this proposal accounts for the uncertainty in estimation of loads from simulation and the large database allows assessment against 50-year loads with quantifiable (and low) uncertainty.
This report documents the data post-processing and analysis performed to date on the field test data. Results include the control capability of the trailing edge flaps, the combined structural and aerodynamic damping observed through application of step actuation with ensemble averaging, direct observation of time delays associated with aerodynamic response, and techniques for characterizing an operating turbine with active rotor control.
Simulations of a rectangular cavity containing a model captive store are performed using a Hybrid Reynolds-averaged Navier-Stokes/Large Eddy Simulation (RANS/LES) model. The fluid flow simulations are coupled to a structural dynamics finite element model using a one-way pressure transfer procedure. Simulation results for pressure fluctuation spectra and store acceleration are compared to measurements made on the same configuration in a tri-sonic wind tunnel at Mach numbers of 0.60, 0.80, and 1.47. The simulation results are used to calculate unsteady integrated forces and moments acting on the store. Spectra of the forces and moments reveal that a complex relationship exists between the unsteady integrated forces and the measured resonant cavity modes as indicated in the cavity wall pressure measurements. Predictions of the store accelerations from the coupled model show some success in predicting both forced and natural modal responses of the store within the cavity environment, while also highlighting some challenges in obtaining statistically converged results for this class of problems.
A newly-developed computational fluid-structure interaction framework for simulation of stores in captive carriage environments is validated. The computational method involves one-way coupling, with pressure loads calculated by a hybrid RANS-LES CFD model transferred to a structural dynamics solver. Validation is performed at several levels. First, the ability of the CFD model to accurately predict the flow-field and resulting aerodynamic loads in an empty cavity is assessed against wind tunnel data. In parallel, the structural dynamics model for a simulated store is calibrated and then validated against a shaker table experiment. Finally, predictions of aerodynamic loads and store vibrations from the coupled simulation model are compared to new wind tunnel experimental data for a model captive carriage configuration.