Overview of Ablation Research at Sandia National Laboratories
Abstract not provided.
AIAA Scitech 2019 Forum
This is the second of three related conference papers focused on verifying and validating a CFD model for laminar hypersonic flows. The first paper covers the code-verification and solution-verification activities. In this paper, we investigate whether the model can accurately simulate laminar, hypersonic flows over double cones from experiments conducted in CUBRC’s LENS-I and LENS-XX wind tunnels. The approach is to use uncertainty quantification and sensitivity analysis, along with a careful examination of experimental uncertainties, to perform validation assessments. These assessments use metrics that probabilistically incorporate both parametric (i.e., freestream-input) uncertainty and experimental uncertainty, and further assessments compare these uncertainties to the iterative and convergence uncertainties described in the first paper. As other researchers have found, the LENS-XX simulations under-predict the experimental heat-flux measurements in the laminar, attached region of the fore-cone. This holds both for a deterministic simulation and for an ensemble of simulations generated probabilistically from CUBRC-provided estimates of freestream-condition uncertainty. The paper concludes with possible reasons why the simulations cannot bracket the experimental observations, motivating the third paper in our series, which examines these explanations further. The results emphasize the importance of carefully measuring experimental conditions and quantifying the uncertainties of validation experiments. Together with its sister papers, this study also demonstrates a process of verification, uncertainty quantification, and quantitative validation for building and assessing the credibility of computational simulations.
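To make the flavor of such a probabilistic comparison concrete, the sketch below (not code from the paper) computes an area-style validation metric between an ensemble of simulated heat-flux values and samples from an assumed experimental-uncertainty model, then checks whether the ensemble brackets the measurement. All numbers, distributions, and names are hypothetical placeholders.

    import numpy as np

    def area_validation_metric(sim, exp):
        """Area between the empirical CDFs of simulated and measured
        values (a Ferson-style area metric); zero when they coincide."""
        grid = np.sort(np.concatenate([sim, exp]))
        F_sim = np.searchsorted(np.sort(sim), grid, side="right") / sim.size
        F_exp = np.searchsorted(np.sort(exp), grid, side="right") / exp.size
        # The ECDF difference is a step function, so a left-endpoint
        # rectangle rule integrates it exactly.
        return np.sum(np.abs(F_sim - F_exp)[:-1] * np.diff(grid))

    rng = np.random.default_rng(0)
    # Hypothetical fore-cone heat-flux values (W/cm^2): the simulation
    # ensemble propagates an assumed freestream uncertainty; the
    # "experiment" samples an assumed Gaussian measurement-error model.
    q_sim = rng.normal(loc=22.0, scale=1.5, size=500)
    q_exp = rng.normal(loc=25.0, scale=1.0, size=500)

    metric = area_validation_metric(q_sim, q_exp)
    brackets = q_sim.min() <= q_exp.mean() <= q_sim.max()
    print(f"area metric = {metric:.2f} W/cm^2; "
          f"ensemble brackets measurement mean: {brackets}")

A nonzero area metric quantifies the mismatch between the two uncertainty distributions; a measurement mean lying outside the ensemble's range is the bracketing failure the abstract describes.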
AIAA Scitech 2019 Forum
We propose a probabilistic framework for assessing the consistency of an experimental dataset, i.e., whether the stated experimental conditions are consistent with the measurements provided. If the dataset is inconsistent, the framework allows one to hypothesize and test sources of the inconsistency, which is crucial in model validation efforts. The framework relies on statistical inference to estimate experimental settings deemed untrustworthy from measurements deemed accurate. The quality of the inferred variables is gauged by their ability to reproduce held-out experimental measurements; if the new predictions are closer to the measurements than before, the cause of the discrepancy is deemed to have been found. The framework brings together recent advances in the use of Bayesian inference and statistical emulators in fluid dynamics with similarity measures for random variables to construct the hypothesis-testing approach. We test the framework on two double-cone experiments executed in the LENS-XX wind tunnel and one in the LENS-I tunnel; all three have caused difficulties when used in model validation exercises. The cause of the difficulties with the LENS-I experiment is known, however, and our inferential framework recovers it. We also detect an inconsistency in one of the LENS-XX experiments and hypothesize three causes for it. We check two of the hypotheses using our framework and find evidence that rejects them. We conclude by proposing that uncertainty quantification methods be used more widely to understand experiments and characterize facilities, and we cite three different methods for doing so, the third of which we present in this paper.
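A minimal sketch of the inference step, under strong simplifying assumptions: a made-up linear "emulator" stands in for a trained surrogate of the CFD model, a grid-based Bayesian update infers an untrusted freestream parameter from two trusted sensors, and a held-out sensor gauges whether the inferred value restores consistency. Every function, value, and noise level below is hypothetical.

    import numpy as np

    # Hypothetical linear emulator mapping a freestream parameter theta
    # (e.g., enthalpy) to two sensor readings; a stand-in for a trained
    # statistical surrogate of the CFD model.
    def emulator(theta):
        return np.array([0.8 * theta + 2.0, 1.1 * theta - 1.0])

    sigma = 0.5                      # assumed measurement noise (1-sigma)
    y_obs = np.array([18.0, 21.0])   # trusted measurements (made up)

    # Grid-based Bayesian update: uniform prior, Gaussian likelihood.
    thetas = np.linspace(10.0, 30.0, 2001)
    pred = np.stack([emulator(t) for t in thetas])        # (2001, 2)
    loglik = -0.5 * np.sum((pred - y_obs) ** 2, axis=1) / sigma**2
    post = np.exp(loglik - loglik.max())
    post /= np.sum(post)
    theta_map = thetas[np.argmax(post)]

    # Consistency check against a held-out (hypothetical) third sensor.
    def held_out(theta):
        return 0.5 * theta + 5.0

    y_held = 15.2                    # made-up held-out measurement
    print(f"MAP theta = {theta_map:.2f}")
    print(f"held-out prediction = {held_out(theta_map):.2f} "
          f"vs measured {y_held}")
    # If the held-out residual is large relative to sigma, the
    # hypothesized source of inconsistency is rejected; if small,
    # the hypothesis is deemed plausible.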
Conference Proceedings of the Society for Experimental Mechanics Series
Experiments are a critical part of the model validation process, and the credibility of the resulting simulations is itself dependent on the credibility of those experiments. Experimental credibility affects model validation at several points throughout the model validation and uncertainty quantification (MVUQ) process, and many aspects of the experiments involved in the development and verification and validation (V&V) of computational simulations impact overall simulation credibility. In this document, we define experimental credibility in the context of model validation and decision making. We summarize possible elements for evaluating experimental credibility, drawing in part on existing and preliminary frameworks developed for evaluating the credibility of computational simulations. The proposed framework is an expert elicitation tool for planning, assessing, and communicating the completeness and correctness of an experiment (“test”) in the context of its intended use: validation. The goals of the assessment are (1) to encourage early communication and planning among the experimentalist, computational analyst, and customer, and (2) to communicate experimental credibility. The assessment tool could also be used to choose among existing candidate datasets for validation. The evidence and story of experimental credibility support the communication of overall simulation credibility.
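As a concrete illustration (not taken from the paper), one way such an elicitation could be recorded is as a simple rubric that scores each credibility element on a maturity scale; the element names and 0-3 scale below are hypothetical placeholders, loosely patterned after maturity-based simulation-credibility frameworks.

    # Hypothetical experimental-credibility rubric: element names and
    # the 0-3 maturity scale are illustrative, not the paper's.
    elements = {
        "boundary/initial condition characterization": 2,
        "instrumentation calibration and uncertainty":  3,
        "measurement of freestream/test conditions":    1,
        "documentation and traceability":               2,
    }

    for name, score in elements.items():
        print(f"{name:<45s} {score}/3")

    weakest = min(elements, key=elements.get)
    print(f"\nlowest-maturity element: {weakest} "
          f"({elements[weakest]}/3)")
    # Overall credibility is often communicated via the weakest
    # element rather than an average, so the tool highlights it.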