The study of heat transfer and ablation plays an important role in many problems of scientific and engineering interest. As with the computational simulation of any physical phenomenon, the first step toward establishing credibility in ablation simulations involves code verification. Code verification is typically performed using exact and manufactured solutions. However, manufactured solutions generally require the invasive introduction of an artificial forcing term within the source code such that the code solves a modified problem for which the solution is known. In this paper, we present a nonintrusive method for manufacturing solutions for a non-decomposing ablation code; the method does not require the addition of a source term.
In this paper, we characterize the logarithmic singularities arising in the method of moments from the Green’s function in integrals over the test domain, and we use two approaches for designing geometrically symmetric quadrature rules to integrate these singular integrands. These rules exhibit better convergence properties than quadrature rules for polynomials and, in general, yield better accuracy with fewer quadrature points. In this work, we demonstrate their effectiveness for several examples encountered in both the scalar and vector potentials of the electric-field integral equation (singular, near-singular, and far interactions), compared with the commonly employed polynomial scheme and the double Ma–Rokhlin–Wandzura (DMRW) rules, whose sample points are located asymmetrically within triangles.
Gemma verification activities for FY20 can be divided into three categories: the development of specialized quadrature rules, initial progress toward the development of manufactured solutions for code verification, and automated code-verification testing. In the method-of-moments implementation of the electric-field integral equation, the presence of a Green’s function in the four-dimensional integrals yields singularities in the integrand when two elements are nearby. To address these challenges, we have developed quadrature rules to integrate the functions through which the singularities can be characterized. Code verification is necessary to develop confidence in the implementation of the numerical methods in Gemma. Therefore, we have begun investigating the use of manufactured solutions to more thoroughly verify Gemma. Manufactured solutions provide greater flexibility for testing aspects of the code; however, the aforementioned singularities pose challenges, and existing work is limited in rigor and quantity. Finally, we have implemented automated code-verification testing using the VVTest framework to automate the mesh refinement and execution of a Gemma simulation to generate mesh convergence data. This infrastructure computes the observed order of accuracy from these data and compares it with the theoretical order of accuracy to either develop confidence in the implementation of the numerical methods or detect coding errors.
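The observed order of accuracy described above is computed from errors on a sequence of systematically refined meshes. A minimal sketch of that computation follows, using hypothetical error data for a nominally second-order scheme (in practice, VVTest drives the mesh refinement and simulation runs that produce these data):

```python
import math

def observed_order(h, e):
    """Observed order of accuracy from successive mesh sizes h and errors e,
    via p_i = log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    return [math.log(e[i] / e[i + 1]) / math.log(h[i] / h[i + 1])
            for i in range(len(h) - 1)]

# Hypothetical convergence data: errors behave like e ~ C * h^2,
# so the observed order should match the theoretical order of 2.
h = [0.1, 0.05, 0.025]
e = [4.0e-3, 1.0e-3, 2.5e-4]
print(observed_order(h, e))  # ≈ [2.0, 2.0]
```

A testing harness would compare each observed order against the theoretical order (within a tolerance) and flag a coding error on mismatch.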
The study of hypersonic flows and their underlying aerothermochemical reactions is particularly important in the design and analysis of vehicles exiting and reentering Earth's atmosphere. Computational physics codes can be employed to simulate these phenomena; however, code verification is necessary to certify the credibility of these codes. To date, few approaches have been presented for verifying codes that simulate hypersonic flows, especially flows reacting in thermochemical nonequilibrium. In this work, we present our code-verification techniques for verifying the spatial accuracy and thermochemical source term in hypersonic reacting flows in thermochemical nonequilibrium. Additionally, we demonstrate the effectiveness of these techniques on the Sandia Parallel Aerodynamics and Reentry Code (SPARC).
Despite extensive research on symmetric polynomial quadrature rules for triangles, as well as approaches to their calculation, few studies have focused on non-polynomial functions, particularly on their integration using symmetric triangle rules. In this paper, we present two approaches to computing symmetric triangle rules for singular integrands by developing rules that can integrate arbitrary functions. The first approach is well suited for a moderate number of points and retains much of the efficiency of polynomial quadrature rules. The second approach better addresses large numbers of points, though it is less efficient than the first approach. We demonstrate the effectiveness of both approaches on singular integrands, for which they can often yield relative errors two orders of magnitude smaller than those from polynomial quadrature rules.
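The symmetric rules themselves are not reproduced here, but the motivation — polynomial rules converge poorly for singular integrands — is easy to illustrate. The sketch below integrates the log-singular function log(x+y) over the reference triangle (exact value -1/4, singular at the vertex (0,0)) with a plain product Gauss rule versus a Duffy-type collapse of the square onto the singular vertex, a standard (though asymmetric) alternative treatment; the map names and setup are illustrative, not the paper's method:

```python
import numpy as np

def gauss01(n):
    """n-point Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (x + 1.0), 0.5 * w

def tri_integral(f, n, duffy):
    """Integrate f(x, y) over the reference triangle {x, y >= 0, x + y <= 1}
    with an n x n product rule on the unit square. duffy=True collapses the
    square onto the singular vertex (0, 0); the Jacobian factor u then damps
    the singularity, so ordinary Gauss points converge much faster."""
    u, wu = gauss01(n)
    total = 0.0
    for ui, wi in zip(u, wu):
        for vj, wj in zip(u, wu):
            if duffy:
                # x = u(1-v), y = uv, Jacobian = u
                total += wi * wj * ui * f(ui * (1.0 - vj), ui * vj)
            else:
                # x = u, y = (1-u)v, Jacobian = 1-u
                total += wi * wj * (1.0 - ui) * f(ui, (1.0 - ui) * vj)
    return total

def f(x, y):
    return np.log(x + y)  # logarithmic singularity at the vertex (0, 0)

exact = -0.25
print(abs(tri_integral(f, 10, duffy=False) - exact))  # slow convergence
print(abs(tri_integral(f, 10, duffy=True) - exact))   # much smaller error
```

The same kind of comparison underlies the reported two-orders-of-magnitude improvement over polynomial rules, though the paper achieves it with symmetric rules rather than a singularity-cancelling transformation.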
This work proposes a machine-learning framework for constructing statistical models of errors incurred by approximate solutions to parameterized systems of nonlinear equations. These approximate solutions may arise from early termination of an iterative method, a lower-fidelity model, or a projection-based reduced-order model, for example. The proposed statistical model comprises the sum of a deterministic regression-function model and a stochastic noise model. The method constructs the regression-function model by applying regression techniques from machine learning (e.g., support vector regression, artificial neural networks) to map features (i.e., error indicators such as sampled elements of the residual) to a prediction of the approximate-solution error. The method constructs the noise model as a mean-zero Gaussian random variable whose variance is computed as the sample variance of the approximate-solution error on a test set; this variance can be interpreted as the epistemic uncertainty introduced by the approximate solution. This work considers a wide range of feature-engineering methods, data-set-construction techniques, and regression techniques that aim to ensure that (1) the features are cheaply computable, (2) the noise model exhibits low variance (i.e., low epistemic uncertainty introduced), and (3) the regression model generalizes to independent test data. Numerical experiments performed on several computational-mechanics problems and types of approximate solutions demonstrate the ability of the method to generate statistical models of the error that satisfy these criteria and significantly outperform more commonly adopted approaches for error modeling.
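A toy sketch of the proposed statistical model, error ≈ regression(features) + N(0, σ²), is below. The data are synthetic stand-ins: rho plays the role of a cheap residual-based error indicator and delta the true approximate-solution error, and ordinary least squares stands in for the machine-learning regressors (support vector regression, neural networks) named above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic (feature, error) pairs: rho is a hypothetical cheap error
# indicator (e.g., a sampled residual norm); delta is the true error,
# related to rho through an unknown trend plus noise of std 0.05.
rho = rng.uniform(0.05, 1.0, size=400)
delta = 2.0 * rho**1.5 + rng.normal(0.0, 0.05, size=rho.size)

# Train/test split.
rho_tr, rho_te = rho[:200], rho[200:]
d_tr, d_te = delta[:200], delta[200:]

# Deterministic regression-function model: least squares on [rho^2, rho, 1].
coef, *_ = np.linalg.lstsq(np.vander(rho_tr, 3), d_tr, rcond=None)

def predict(r):
    return np.vander(r, 3) @ coef

# Stochastic noise model: mean-zero Gaussian whose variance is the sample
# variance of the prediction error on the held-out test set; this is the
# epistemic uncertainty introduced by the approximate solution.
residual = d_te - predict(rho_te)
sigma2 = residual.var(ddof=1)
print(sigma2)  # should be close to the injected noise variance 0.05**2
```

A well-chosen feature/regressor pair drives sigma2 down toward the irreducible noise level, which is the criterion (low epistemic uncertainty, good generalization) the framework targets.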
The study of hypersonic flows and their underlying aerothermochemical reactions is particularly important in the design and analysis of vehicles exiting and reentering Earth’s atmosphere. Computational physics codes can be employed to simulate these phenomena; however, code verification is necessary to certify the credibility of these codes. To date, few approaches have been presented for verifying codes that simulate hypersonic flows, especially flows reacting in thermochemical nonequilibrium. In this paper, we present our code-verification techniques for hypersonic reacting flows in thermochemical nonequilibrium, as well as their deployment in the Sandia Parallel Aerodynamics and Reentry Code (SPARC).
We propose a probabilistic framework for assessing the consistency of an experimental dataset, i.e., whether the stated experimental conditions are consistent with the measurements provided. If the dataset is inconsistent, our framework allows one to hypothesize and test sources of the inconsistency. This is crucial in model validation efforts. The framework relies on statistical inference to estimate experimental settings deemed untrustworthy from measurements deemed accurate. The quality of the inferred variables is gauged by their ability to reproduce held-out experimental measurements; if the new predictions are closer to the measurements than before, the cause of the discrepancy is deemed to have been found. The framework brings together recent advances in the use of Bayesian inference and statistical emulators in fluid dynamics with similarity measures for random variables to construct the hypothesis-testing approach. We test the framework on two double-cone experiments executed in the LENS-XX wind tunnel and one in the LENS-I tunnel; all three have encountered difficulties when used in model validation exercises. However, the cause behind the difficulties with the LENS-I experiment is known, and our inferential framework recovers it. We also detect an inconsistency with one of the LENS-XX experiments and hypothesize three causes for it. We check two of the hypotheses using our framework, and we find evidence that rejects them. We end by proposing that uncertainty quantification methods be used more widely to understand experiments and characterize facilities, and we cite three different methods to do so, the third of which we present in this paper.
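The specific similarity measures and Bayesian machinery are not reproduced here, but the core consistency check — predictions at inferred conditions should match held-out measurements better than predictions at the stated conditions — can be illustrated with a generic distributional distance. The sketch below uses a two-sample Kolmogorov–Smirnov statistic on synthetic Gaussian samples; all data and the choice of statistic are illustrative assumptions:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(cdf_a - cdf_b).max()

rng = np.random.default_rng(1)
measured = rng.normal(10.0, 1.0, size=500)     # held-out measurements
pred_before = rng.normal(12.0, 1.0, size=500)  # predictions at stated conditions
pred_after = rng.normal(10.1, 1.0, size=500)   # predictions at inferred conditions

# A much smaller statistic after inference supports the hypothesized
# source of the inconsistency; no improvement is evidence against it.
print(ks_statistic(measured, pred_before), ks_statistic(measured, pred_after))
```

In the framework itself, the "predictions" come from statistical emulators of the flow solver evaluated at posterior samples of the untrustworthy settings, rather than from closed-form distributions as here.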
This project will enable high-fidelity aerothermal simulations of hypersonic vehicles to be employed (1) to generate large databases with quantified uncertainties and (2) to perform rapid interactive simulation. The databases will increase the volume and quality of A4H data; rapid interactive simulation will enable arbitrary conditions and designs to be simulated on demand. We will achieve this by applying reduced-order-modeling techniques to aerothermal simulations.