Pragmatic Experiences integrating V&V into risk and qualification analysis of complex coupled systems
Abstract not provided.
A two-year effort was proposed to apply ASCI technology, developed for the analysis of weapons systems, to state-of-the-art accident analysis of a nuclear reactor system. The Sandia SIERRA parallel computing platform for ASCI codes includes high-fidelity thermal, fluids, and structural codes whose coupling through SIERRA can be specifically tailored to the particular problem at hand to analyze complex multiphysics problems. Presently, however, the suite lacks several physics modules unique to the analysis of nuclear reactors. The NRC MELCOR code, not presently part of SIERRA, was developed to analyze severe accidents in present-technology reactor systems. We attempted to: (1) evaluate the SIERRA code suite for its current applicability to the analysis of next-generation nuclear reactors and the feasibility of implementing MELCOR models into the SIERRA suite, (2) examine the possibility of augmenting ASCI codes or alternatives by coupling to the MELCOR code, or portions thereof, to address physics particular to nuclear reactor issues, especially those facing next-generation reactor designs, and (3) apply the coupled code set to a demonstration problem involving a nuclear reactor system. We completed the first two tasks in sufficient detail to determine that an extensive demonstration problem was not feasible at this time. In the future, completion of this research would demonstrate the feasibility of performing high-fidelity, rapid analyses of safety and design issues needed to support the development of next-generation power reactor systems.
A very general and robust approach to solving optimization problems involving probabilistic uncertainty is through the use of Probabilistic Ordinal Optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the probabilistic merits of local design alternatives, rather than on crisp quantification of the alternatives. Thus, to whatever level of statistical confidence we require, we simply ask 'Is that alternative better or worse than this one?' rather than 'How much better or worse is that alternative than this one?'. In this paper we illustrate an elementary application of probabilistic ordinal concepts in a 2-D optimization problem. Two uncertain variables contribute to uncertainty in the response function. We use a simple Coordinate Pattern Search non-gradient-based optimizer to step toward the statistical optimum in the design space. We also discuss more sophisticated implementations, and some of the advantages and disadvantages versus non-ordinal approaches for optimization under uncertainty.
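To make the ordinal idea concrete, here is a minimal Python sketch pairing a coordinate pattern search with a purely ordinal acceptance test on a hypothetical two-variable noisy response. The response function, noise levels, sample size, and the use of a one-sided Welch t-test as the ranking statistic are illustrative assumptions, not details from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def response(x, n):
    """Noisy response at design x: a simple bowl with two uncertain inputs.
    (Hypothetical test function, not the one from the paper.)"""
    u = rng.normal(0.0, [0.5, 0.2], size=(n, 2))   # two uncertain variables
    return (x[0] + u[:, 0])**2 + (x[1] + u[:, 1])**2

def ordinally_better(x_new, x_old, n=50, alpha=0.05):
    """Ask only 'is x_new better than x_old?' at confidence 1 - alpha,
    never 'by how much?' -- here via a one-sided Welch t-test."""
    a, b = response(x_new, n), response(x_old, n)
    _, p = stats.ttest_ind(a, b, equal_var=False, alternative='less')
    return p < alpha

def coordinate_pattern_search(x0, h=1.0, h_min=1e-2):
    x = np.asarray(x0, float)
    while h > h_min:
        for step in [d for s in (h, -h) for d in s * np.eye(2)]:
            if ordinally_better(x + step, x):
                x = x + step
                break
        else:
            h *= 0.5   # no statistically better neighbor: shrink the pattern
    return x

print(coordinate_pattern_search([2.0, -1.5]))   # steps toward (0, 0)
```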
A recently developed Centroidal Voronoi Tessellation (CVT) unstructured sampling method is investigated here to assess its suitability for use in statistical sampling and function integration. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. It has recently been shown on several 2-D test problems to provide superior point distributions for generating locally conforming response surfaces. In this paper, its performance as a statistical sampling and function integration method is compared to that of Latin Hypercube Sampling (LHS) and Simple Random Sampling (SRS) Monte Carlo methods, and to Halton and Hammersley quasi-Monte Carlo sequence methods. Specifically, sampling efficiencies are compared for function integration and for resolving various statistics of response in a 2-D test problem. It is found that, on balance, CVT performs best of all these sampling methods on our test problems.
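A rough illustration of the comparison is sketched below, assuming CVT generators can be approximated by running k-means on a dense uniform point cloud (one common way to compute a CVT). The integrand, sample sizes, and the use of SciPy's qmc module for LHS and Halton points are illustrative choices, not the paper's setup, and the Hammersley sequence is omitted for brevity.

```python
import numpy as np
from scipy.cluster.vq import kmeans
from scipy.stats import qmc

rng = np.random.default_rng(1)
n = 64                                          # sample points per method
f = lambda x: np.exp(-np.sum(x**2, axis=1))     # hypothetical 2-D integrand on [0,1]^2

# CVT generators approximated by k-means on a dense uniform cloud; each centroid
# then carries (roughly) equal Voronoi-cell measure, so an equal-weight mean is used.
cloud = rng.random((20000, 2))
cvt_pts, _ = kmeans(cloud, n, iter=5)

samples = {
    "SRS":    rng.random((n, 2)),
    "LHS":    qmc.LatinHypercube(d=2, seed=3).random(n),
    "Halton": qmc.Halton(d=2, seed=4).random(n),
    "CVT":    cvt_pts,
}

# Equal-weight estimates of the integral of f over the unit square.
for name, pts in samples.items():
    print(f"{name:6s} integral estimate: {f(pts).mean():.5f}")
```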
Abstract not provided.
Thermochimica Acta
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time-consuming, CPU-intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. The response variable was chosen as the steady-state decomposition front velocity. Four different analyses are presented, including (1) an analytical mean value (MV) analysis, (2) a linear surrogate response surface (LIN) using a constrained Latin hypercube sampling (LHS) technique, (3) a quadratic surrogate response surface (QUAD) using LHS, and (4) a direct LHS (DLHS) analysis using the full grid- and time-step-resolved finite element model. To minimize numerical noise, 50 μm elements and approximately 1 ms time steps were required to obtain stable uncertainty results. The complex finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
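The surrogate-versus-direct comparison can be sketched as follows for a hypothetical three-input stand-in model (the actual model has 25 parameters and is a finite element code). Only the LHS design, the least-squares LIN/QUAD fits, and the mean/standard-deviation comparison against a direct LHS run are taken from the abstract; everything else is illustrative.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-in for the foam-decomposition front velocity: a smooth
# nonlinear function of 3 normalized inputs, for illustration only.
def model(x):
    return 1.0 + 0.8*x[:, 0] - 0.3*x[:, 1]**2 + 0.1*x[:, 0]*x[:, 2]

n = 200
x = qmc.LatinHypercube(d=3, seed=1).random(n) * 2 - 1   # LHS design on [-1, 1]^3
y = model(x)

# Linear (LIN) and quadratic (QUAD) surrogate response surfaces by least squares.
lin_basis  = np.column_stack([np.ones(n), x])
quad_basis = np.column_stack([lin_basis, x**2,
                              x[:, [0]]*x[:, [1]], x[:, [0]]*x[:, [2]], x[:, [1]]*x[:, [2]]])
c_lin,  *_ = np.linalg.lstsq(lin_basis,  y, rcond=None)
c_quad, *_ = np.linalg.lstsq(quad_basis, y, rcond=None)

# Propagate a large LHS sample through each surrogate and compare statistics
# against sampling the model itself (the DLHS analogue).
xs = qmc.LatinHypercube(d=3, seed=2).random(20000) * 2 - 1
ls = np.column_stack([np.ones(len(xs)), xs])
qs = np.column_stack([ls, xs**2, xs[:, [0]]*xs[:, [1]], xs[:, [0]]*xs[:, [2]], xs[:, [1]]*xs[:, [2]]])
for name, vals in [("DLHS", model(xs)), ("LIN", ls @ c_lin), ("QUAD", qs @ c_quad)]:
    print(f"{name:4s} mean={vals.mean():.4f}  std={vals.std():.4f}")
```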
This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.
In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required when comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine whether they behave as true 95% CI over a spectrum of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for sample sizes of 500 and 10,000 where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that the estimated 95% CI can be rejected as not being true 95% CI with less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be, conservatively, over an order of magnitude more efficient than simple random sampling (SRS) for similarly sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
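A minimal sketch of the empirical coverage check is given below, assuming the "classical approximate CI" is the normal-theory (Wald) interval for a proportion. The spectrum of probabilities and the 100-trial, 500-sample settings echo the abstract; the rest is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
z = 1.96            # classical approximate (normal-theory) 95% CI multiplier
n_samples = 500     # Monte Carlo sample size per trial
n_trials = 100      # trials per probability, as in the hypothesis test above

# Empirical coverage of the estimated 95% CI over a spectrum of true probabilities.
for p in [0.001, 0.01, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99]:
    covered = 0
    for _ in range(n_trials):
        hits = rng.random(n_samples) < p
        phat = hits.mean()
        half = z * np.sqrt(phat * (1.0 - phat) / n_samples)
        covered += (phat - half <= p <= phat + half)
    print(f"p={p:<6}  empirical coverage of 'estimated 95% CI': {covered/n_trials:.2f}")
```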
This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
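A rough sketch of the idea follows, assuming nested full-factorial grids as a stand-in for Progressive Lattice Sampling levels, a thin-plate RBF interpolant as a kriging-like surrogate, quadratic polynomial regression as a second surrogate, and RMS error on an arbitrary analytic test function as the quality indicator. None of these are the paper's specific choices.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical analytic test function on [0,1]^2 (not the one used in the paper).
f = lambda x: np.sin(3*x[:, 0]) * np.exp(-x[:, 1]) + 0.5*x[:, 1]

def full_factorial(m):
    """m x m grid on [0,1]^2 -- a stand-in for one sampling level; successive
    levels (3, 5, 9 points per axis) are nested, so earlier samples are reused."""
    g = np.linspace(0, 1, m)
    return np.array(np.meshgrid(g, g)).reshape(2, -1).T

rng = np.random.default_rng(0)
x_check = rng.random((2000, 2))            # independent points for error measurement
y_check = f(x_check)

for level, m in enumerate([3, 5, 9], start=1):
    x = full_factorial(m)
    y = f(x)
    # Kriging-like interpolant (thin-plate RBF) vs. quadratic polynomial regression.
    rbf = RBFInterpolator(x, y)
    basis = lambda p: np.column_stack([np.ones(len(p)), p, p**2, p[:, [0]]*p[:, [1]]])
    coef, *_ = np.linalg.lstsq(basis(x), y, rcond=None)
    err_rbf  = np.sqrt(np.mean((rbf(x_check) - y_check)**2))
    err_poly = np.sqrt(np.mean((basis(x_check) @ coef - y_check)**2))
    print(f"level {level}: {len(x):3d} samples  RMS(RBF)={err_rbf:.4f}  RMS(quad)={err_poly:.4f}")
```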
Abstract not provided.
Incomplete convergence in numerical simulations, such as computational physics simulations and/or Monte Carlo simulations, can enter into the calculation of the objective function in an optimization problem, producing noise, bias, and topographical inaccuracy in the objective function. These affect accuracy and convergence rate in the optimization problem. This paper is concerned with global searching of a diverse parameter space, graduating to accelerated local convergence to a (hopefully) global optimum, in a framework that acknowledges convergence uncertainty and manages model resolution to efficiently reduce uncertainty in the final optimum. In its own right, the global-to-local optimization engine employed here (devised for noise tolerance) performs better than other classical and contemporary optimization approaches tried individually and in combination on the "industrial" test problem to be presented.
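One way to picture the resolution-management idea is sketched below: a toy "simulation" whose convergence tolerance is tied to the optimizer's pattern size, so that convergence error is reduced only as the search localizes. The objective, the tolerance schedule, and the simple pattern search are stand-ins for the actual global-to-local engine, not the method of the paper.

```python
import numpy as np

def objective(x, tol):
    """Stand-in for a simulation with incomplete convergence: a fixed-point
    iteration stopped at tolerance `tol`, so loose tolerances inject bias/noise."""
    target = (x[0] - 1.0)**2 + (x[1] + 2.0)**2
    val, err = 0.0, np.inf
    while err > tol:                  # crude iterative 'solver'
        val += 0.5 * (target - val)   # converges to `target` as tol -> 0
        err = abs(target - val)
    return val

def pattern_search(x0, h=2.0, h_min=1e-3):
    """Global-to-local search that tightens solver resolution as the step shrinks,
    so convergence error is only reduced where (and when) it matters."""
    x = np.asarray(x0, float)
    while h > h_min:
        tol = max(1e-8, 0.1 * h)      # manage model resolution with the step size
        trials = [x + d for s in (h, -h) for d in s * np.eye(2)]
        vals = [objective(t, tol) for t in trials]
        i = int(np.argmin(vals))
        if vals[i] < objective(x, tol):
            x = trials[i]
        else:
            h *= 0.5
    return x

print(pattern_search([6.0, 6.0]))     # approaches the true optimum at (1, -2)
```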
Economic and political demands are driving computational investigation of systems and processes like never before. It is foreseen that questions of safety, optimality, risk, robustness, likelihood, credibility, etc. will increasingly be posed to computational modelers. This will require the development and routine use of computing infrastructure that incorporates computational physics models within the framework of larger meta-analyses involving aspects of optimization, nondeterministic analysis, and probabilistic risk assessment. This paper describes elements of an ongoing case study involving the computational solution of several meta-problems in optimization, nondeterministic analysis, and optimization under uncertainty pertaining to the surety of a generic weapon safing device. The goal of the analyses is to determine the worst-case heating configuration in a fire that most severely threatens the integrity of the device. A large, 3-D, nonlinear, finite element thermal model is used to determine the transient thermal response of the device in this coupled conduction/radiation problem. Implications of some of the numerical aspects of the thermal model on the selection of suitable and efficient optimization and nondeterministic analysis algorithms are discussed.
The concept of "Progressive Lattice Sampling" as a basis for generating successive finite element response surfaces that are increasingly effective in matching actual response functions is investigated here. The goal is optimal response surface generation, which achieves an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. This is important in global optimization and in estimation of system probabilistic response, both of which are made much more viable by replacing large, complex computer models of system behavior with fast-running, accurate approximations. This paper outlines the methodology for Finite Element/Lattice Sampling (FE/LS) response surface generation and examines the effectiveness of progressively refined FE/LS response surfaces in decoupled Monte Carlo analysis of several model problems. The proposed method is in all cases more efficient (generally orders of magnitude more efficient) than direct Monte Carlo evaluation, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer-model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling. Furthermore, the marginal efficiency of the FE/LS decoupled Monte Carlo approach increases as the size of the computer model increases, which is a very favorable property.
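The decoupled approach can be sketched as follows, with a cheap analytic function standing in for the expensive computer model and a small grid plus radial-basis-function surface standing in for the FE/LS construction. The function, threshold, and surface type are assumptions; the contrast in model-evaluation counts is the point.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Hypothetical 'expensive' model and failure criterion, for illustration only.
def expensive_model(x):
    return np.sin(2.5 * x[:, 0]) + 0.5 * x[:, 1]**2

threshold = 1.2                       # 'failure' when the response exceeds this

# Structured sample (a simple grid as a stand-in for a lattice-sampling design)
# used to build the fast-running response surface.
g = np.linspace(-3, 3, 9)
x_train = np.array(np.meshgrid(g, g)).reshape(2, -1).T        # 81 model runs
surface = RBFInterpolator(x_train, expensive_model(x_train))

# Decoupled Monte Carlo: a large sample on the cheap surface, zero extra model runs.
x_mc = rng.normal(0.0, 1.0, size=(200_000, 2))
p_surface = np.mean(surface(x_mc) > threshold)

# Direct Monte Carlo reference (affordable here only because the 'model' is cheap).
p_direct = np.mean(expensive_model(x_mc) > threshold)

print(f"P(failure) via response surface: {p_surface:.4f}  (81 model evaluations)")
print(f"P(failure) via direct MC:        {p_direct:.4f}  (200,000 model evaluations)")
```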
Optimal response surface construction is being investigated as part of Sandia discretionary (LDRD) research into Analytic Nondeterministic Methods. The goal is to achieve an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. This is important in global optimization and in estimation of system probabilistic response, which are both made more viable by replacing large complex computer models with fast-running, accurate, and noiseless approximations. A Finite Element/Lattice Sampling (FE/LS) methodology for constructing progressively refined finite element response surfaces that reuse previous generations of samples is described here. Similar finite element implementations can be extended to N-dimensional problems and/or random fields and applied to other types of structured sampling paradigms, such as classical experimental design and Gauss, Lobatto, and Patterson sampling. Here the FE/LS model is applied in a "decoupled" Monte Carlo analysis of two sets of probability quantification test problems. The analytic test problems, spanning a large range of probabilities and very demanding failure region geometries, constitute a good testbed for comparing the performance of various nondeterministic analysis methods. In results here, FE/LS decoupled Monte Carlo analysis required orders of magnitude less computer time than direct Monte Carlo analysis, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer-model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling.
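A minimal example of the kind of analytic probability test problem mentioned above: a thin annular failure region in two standard-normal variables, whose probability is known in closed form and can be checked against direct sampling. The specific geometry and sample sizes are illustrative, not the paper's testbed.

```python
import numpy as np
from scipy.stats import qmc, norm

rng = np.random.default_rng(0)

# Analytic test problem: two standard-normal inputs, 'failure' when the point
# falls in a thin annulus r1 < r < r2 -- a nonconvex failure region whose
# probability follows from the Rayleigh radial distribution in closed form.
r1, r2 = 2.0, 2.2
p_exact = np.exp(-r1**2 / 2) - np.exp(-r2**2 / 2)

def failed(u):                           # u: (n, 2) standard-normal samples
    r = np.hypot(u[:, 0], u[:, 1])
    return (r > r1) & (r < r2)

n = 100_000
p_srs = failed(rng.normal(size=(n, 2))).mean()                 # direct SRS
u_lhs = norm.ppf(qmc.LatinHypercube(d=2, seed=1).random(n))    # direct LHS
p_lhs = failed(u_lhs).mean()

print(f"exact {p_exact:.5f}   SRS {p_srs:.5f}   LHS {p_lhs:.5f}")
```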
One emphasis of weapon surety (safety and security) at Sandia National Laboratories is the assessment of fire-related risk to weapon systems. New developments in computing hardware and software make possible the application of a new generation of very powerful analysis tools for surety assessment. This paper illustrates the application of some of these computational tools to assess the robustness of a conceptual firing set design in severe thermal environments. With these assessment tools, systematic interrogation of the parameter space governing the thermal robustness of the firing set has revealed much greater vulnerability than traditional ad hoc techniques had indicated. These newer techniques should be routinely applied in weapon design and assessment to produce more fully characterized and robust systems where weapon surety is paramount. As well as helping expose and quantify vulnerabilities in systems, these tools can be used in design and resource allocation processes to build safer, more reliable, more optimal systems.
A numerical model for simulating the transient nonlinear behavior of 2-D viscous sloshing flows in rectangular containers subjected to arbitrary horizontal accelerations is presented. The potential-flow formulation uses Rayleigh damping to approximate the effects of viscosity, and Lagrangian node movement is used to accommodate violent sloshing motions. A boundary element approach is used to efficiently handle the time-changing fluid geometry. Additionally, a corrected equation is presented for the constraint condition relating normal and tangential derivatives of the velocity potential where the fluid free surface meets the rigid container wall. The numerical model appears to be more accurate than previous sloshing models, as determined by comparison against exact analytic solutions and results of previously published models.
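For reference, a commonly used Lagrangian form of the damped free-surface conditions for this class of problem is sketched below; the grouping of terms, the forcing term for the horizontal container acceleration, and the single Rayleigh coefficient are assumptions and may differ in detail from the paper's formulation.

```latex
% Sketch (assumed form): Lagrangian free-surface conditions for nonlinear
% potential flow with Rayleigh damping and prescribed horizontal base
% acceleration \ddot{X}(t); \nabla^2\phi = 0 holds in the fluid interior.
\[
  \frac{D\mathbf{x}}{Dt} = \nabla\phi, \qquad
  \frac{D\phi}{Dt} = \tfrac{1}{2}\,\lvert\nabla\phi\rvert^{2}
                     - g\,\eta - \ddot{X}(t)\,x - \mu\,\phi ,
\]
```

Here phi is the velocity potential, eta the free-surface elevation, g the gravitational acceleration, x the horizontal coordinate in the container frame, and the mu*phi term is the Rayleigh approximation to viscous damping.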
Thermal optimization procedures have been applied to determine the worst-case heating boundary conditions that a safety device can be credibly subjected to. There are many interesting aspects of this work in the areas of thermal transport, optimization, discrete modeling, and computing. The forward problem involves transient simulations with a nonlinear 3-D finite element model solving a coupled conduction/radiation problem. Coupling to the optimizer requires that boundary conditions in the thermal model be parameterized in terms of the optimization variables. The optimization is carried out over a diverse multi-dimensional parameter space where the forward evaluations are computationally expensive and of unknown duration a priori. The optimization problem is complicated by numerical artifacts resulting from discrete approximation and finite computer precision, as well as theoretical difficulties associated with navigating to a global minimum on a nonconvex objective function having a fold and several local minima. In this paper we report on the solution of the optimization problem, discuss implications of some of the features of this problem on selection of a suitable and efficient optimization algorithm, and share lessons learned, fixes implemented, and research issues identified along the way.
CIRCE2 is a computer code for modeling the optical performance of three-dimensional dish-type solar energy concentrators. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator. Given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun), and concentrator imperfections such as surface roughness and random deviation in slope, the code predicts the flux distribution and total power incident upon the target. Great freedom exists in the variety of concentrator and receiver configurations that can be modeled. Additionally, provisions for shading and receiver aperturing are included. DEKGEN2 is a preprocessor designed to facilitate input of geometry, error distributions, and sun models. This manual describes the optical model, user inputs, code outputs, and operation of the software package. A user tutorial is included in which several collectors are built and analyzed in step-by-step examples.
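The flavor of the statistical treatment can be conveyed with a crude cone-optics sketch: reflected-ray directions are perturbed by a sampled sunshape and by surface slope errors (doubled on reflection), and the fraction intercepted within a receiver acceptance cone is tallied. All numerical values below are placeholders, not CIRCE2 defaults, and the actual code resolves a full 3-D geometry and flux map rather than a single acceptance angle.

```python
import numpy as np

rng = np.random.default_rng(0)
n_rays = 200_000

# Hypothetical error budget (radians) -- placeholder values, not CIRCE2 defaults.
sun_half_angle = 4.65e-3         # simple 'pillbox' sunshape half-angle
slope_error_rms = 2.0e-3         # RMS surface slope error, per axis
acceptance_half_angle = 12.0e-3  # receiver acceptance cone about the ideal ray

# Sample the sunshape as a uniform disc of directions about the nominal sun vector.
r = sun_half_angle * np.sqrt(rng.random(n_rays))
th = rng.uniform(0.0, 2.0 * np.pi, n_rays)
sun_x, sun_y = r * np.cos(th), r * np.sin(th)

# Gaussian slope errors deflect the reflected ray by twice the surface slope deviation.
slope_x = rng.normal(0.0, slope_error_rms, n_rays)
slope_y = rng.normal(0.0, slope_error_rms, n_rays)

dev = np.hypot(sun_x + 2.0 * slope_x, sun_y + 2.0 * slope_y)
intercept = np.mean(dev < acceptance_half_angle)
print(f"Estimated intercept fraction for this point on the concentrator: {intercept:.3f}")
```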