Publications

Results 101–147 of 147


Comparison of several model validation conceptions against a "real space" end-to-end approach

SAE Technical Papers

Romero, Vicente J.

This paper explores some of the important considerations in devising a practical and consistent framework and methodology for working with experiments and experimental data in connection with modeling and prediction. The paper outlines a pragmatic and versatile "real-space" approach within which experimental and modeling uncertainties (correlated and uncorrelated, systematic and random, aleatory and epistemic) are treated to mitigate risk in modeling and prediction. The elements of data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. The considerations and options are many, and a large variety of viewpoints and precedents exist in the literature, as surveyed here. Rationale is given for the various choices taken in assembling the novel real-space end-to-end framework. The framework adopts some elements and constructs from the literature (sometimes adding needed refinement), rejects others (even some currently popular ones), and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various categories of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, structural mechanics, irradiated electronics, and combustion in fluids and solids. © 2011 SAE International.


Coupled thermal-mechanical experiments for validation of pressurized, high temperature systems

Dempsey, J.F.; Wellman, Gerald W.; Scherzinger, William M.; Connelly, Kevin; Romero, Vicente J.

Instrumented, fully coupled thermal-mechanical experiments were conducted to provide validation data for finite element simulations of failure in pressurized, high temperature systems. The design and implementation of the experimental methodology are described in another paper of this conference. Experimental coupling was accomplished on tubular 304L stainless steel specimens by mechanical loading imparted by internal pressurization and thermal loading by side radiant heating. Experimental parameters, including temperature and pressurization ramp rates, maximum temperature and pressure, phasing of the thermal and mechanical loading, and specimen geometry details were studied. Experiments were conducted to increasing degrees of deformation, up to and including failure. Mechanical characterization experiments of the 304L stainless steel tube material were also completed for development of a thermal elastic-plastic material constitutive model used in the finite element simulations of the validation experiments. The material was characterized in tension at a strain rate of 0.001/s from room temperature to 800 °C. The tensile behavior of the tube material was found to differ substantially from that of 304L bar stock material, with the plasticity characteristics and strain to failure differing at every test temperature.


Data & model conditioning for multivariate systematic uncertainty in model calibration, validation, and extrapolation

Romero, Vicente J.

This paper discusses implications and appropriate treatment of systematic uncertainty in experiments and modeling. Systematic uncertainty exists when experimental conditions, measurement bias errors, and/or bias contributed by post-processing the data are constant over the set of experiments, but the particular values of the conditions and/or biases are unknown to within some specified uncertainty. Systematic uncertainties in experiments do not automatically show up in the output data, unlike random uncertainty, which is revealed when multiple experiments are performed. Therefore, the output data must be properly 'conditioned' to reflect important sources of systematic uncertainty in the experiments. In industrial-scale experiments the systematic uncertainty in experimental conditions (especially boundary conditions) is often large enough that the inference error on how the experimental system maps inputs to outputs is quite substantial. Any such inference error and its uncertainty also have implications for model validation and calibration/conditioning; ignoring systematic uncertainty in experiments can lead to 'Type X' error in these procedures. Apart from any considerations of modeling and simulation, reporting of uncertainty associated with experimental results should include the effects of any significant systematic uncertainties in the experiments. This paper describes and illustrates the treatment of multivariate systematic uncertainties of interval and/or probabilistic natures, and combined cases. The paper also outlines a practical and versatile 'real-space' framework and methodology within which experimental and modeling uncertainties (correlated and uncorrelated, systematic and random, aleatory and epistemic) are treated to mitigate risk in model validation, calibration/conditioning, hierarchical modeling, and extrapolative prediction.


Application of a pragmatic interval-based "real space" approach to fire-model validation involving aleatory and epistemic uncertainty

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Romero, Vicente J.; Luketa, Anay

This paper applies a pragmatic interval-based approach to validation of a fire dynamics model involving computational fluid dynamics, combustion, participating-media radiation, and heat transfer. Significant aleatory and epistemic sources of uncertainty exist in the experiments and simulations. The validation comparison of experimental and simulation results, and corresponding criteria and procedures for model affirmation or refutation, take place in "real space" as opposed to "difference space", where subtractive differences between experiments and simulations are assessed. The versatile model validation framework handles difficulties associated with representing and aggregating aleatory and epistemic uncertainties from multiple correlated and uncorrelated source types, including:

• experimental variability from multiple repeat experiments
• uncertainty of experimental inputs
• experimental output measurement uncertainties
• uncertainties that arise in data processing and inference from raw simulation and experiment outputs
• parameter and model-form uncertainties intrinsic to the model
• numerical solution uncertainty from model discretization effects

The framework and procedures of the model validation methodology are here applied to a difficult validation problem involving experimental and predicted calorimeter temperatures in a wind-driven hydrocarbon pool fire.


Validation and uncertainty quantification of Fuego simulations of calorimeter heating in a wind-driven hydrocarbon pool fire

Luketa, Anay; Romero, Vicente J.; Domino, Stefan P.; Glaze, David J.; Figueroa Faria, Victor G.

The objective of this work is to perform an uncertainty quantification (UQ) and model validation analysis of simulations of tests in the cross-wind test facility (XTF) at Sandia National Laboratories. In these tests, a calorimeter was subjected to a fire and the thermal response was measured via thermocouples. The UQ and validation analysis pertains to the experimental and predicted thermal response of the calorimeter. The calculations were performed using Sierra/Fuego/Syrinx/Calore, an Advanced Simulation and Computing (ASC) code capable of predicting object thermal response to a fire environment. Based on the validation results at eight diversely representative thermocouple (TC) locations on the calorimeter, the predicted calorimeter temperatures effectively bound the experimental temperatures. This post-validates Sandia's first integrated use of fire modeling with thermal response modeling and associated uncertainty estimates in an abnormal-thermal QMU analysis.


Efficiencies from spatially-correlated uncertainty and sampling in continuous-variable ordinal optimization

SAE International Journal of Materials and Manufacturing

Romero, Vicente J.

A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effect. One simply asks "Is that alternative better or worse than this one?", not "HOW MUCH better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. By looking at things from an ordinal ranking perspective instead, the trade-off between computational expense and vagueness in the uncertainty characterization can be managed to make cost-effective stepping decisions in the design space. This paper demonstrates correct advancement in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. It is explained and shown how spatial correlation of uncertainty in such design problems can be exploited to dramatically increase the efficiency of ordinal approaches to optimization under uncertainty.
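
A minimal sketch (not from the paper; the objective function, noise model, and sample counts are illustrative assumptions) of one way spatial correlation of uncertainty can be exploited in ordinal comparisons: evaluating two neighboring designs with the same random draws lets the shared noise largely cancel in the pairwise difference, so a better/worse ranking can be made from far fewer samples than a precise estimate of either objective would require.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_objective(x, z):
        # Hypothetical objective: a deterministic trend plus a noise term whose
        # effect varies smoothly with x, so nearby designs see correlated noise.
        return (x - 2.0) ** 2 + (0.5 * np.sin(3.0 * x) + 0.2) * z

    def rank_pair(x_a, x_b, n_samples=20):
        # Common random numbers: reuse the same draws z for both designs so the
        # shared noise cancels in the pairwise difference.
        z = rng.standard_normal(n_samples)
        diff = noisy_objective(x_a, z) - noisy_objective(x_b, z)
        return "A better" if diff.mean() < 0.0 else "B better"

    print(rank_pair(1.0, 1.1))  # ordinal answer only: which is better, not by how much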


Type X and Y errors and data & model conditioning for systematic uncertainty in model calibration, validation, and extrapolation

SAE Technical Papers

Romero, Vicente J.

This paper introduces and develops the concept of "Type X" and "Type Y" errors in model validation and calibration, and their implications for extrapolative prediction. Type X error is non-detection of model bias because it is effectively hidden by the uncertainty in the experiments. Possible deleterious effects of Type X error can be avoided by mapping uncertainty into the model until it envelops the potential model bias, but this likely assigns a larger uncertainty than is needed to account for the actual bias (Type Y error). A philosophy of Best Estimate + Uncertainty modeling and prediction is probably best supported by taking the conservative choice of guarding against Type X error while accepting the downside of incurring Type Y error. An associated methodology involving data- and model-conditioning is presented and tested on a simple but rich test problem. The methodology is shown to appropriately contend with model bias under conditions of systematic experimental input uncertainty in the test problem. The methodology effectively bounds the uncertain model bias and brings a correction into the model that extrapolates very well under a large variety of extrapolation conditions. The methodology has been straightforwardly applied to considerably more complex real problems where system response is likewise jointly monotonic in the input uncertainties. The methodology also allows for other types of systematic and random uncertainty in the experiments and model, as discussed herein. Copyright © 2008 SAE International.


A paradigm of model validation and validated models for best-estimate-plus-uncertainty predictions in systems engineering

SAE Technical Papers

Romero, Vicente J.

What constitutes a validated model? What are the criteria that allow one to defensibly claim to be using a validated model in an analysis? These questions get to the heart of what model validation really implies (conceptually, operationally, interpretationally, etc.), and these details are currently the subject of substantial debate in the V&V community. This is perhaps because many contemporary paradigms of model validation have a limited modeling scope in mind, so the validation paradigms do not span the different modeling regimes and purposes that are important in engineering. This paper discusses the different modeling regimes and purposes that a validation theory should span, and then proposes a validation paradigm that appears to span them. The author's criterion for validated models proceeds from a desire to meet an end objective of "best estimate plus uncertainty" (BEPU) in model predictions. Starting from this end, the author works back to the implications for the model validation process (conceptually, operationally, interpretationally, etc.). Ultimately a shift is required in the conceptualization and articulation of model validation, away from contemporary paradigms. Thus, this paper points out weaknesses in contemporary model validation perspectives and proposes a conception of model validation and validated models that seems to reconcile many of the issues. Copyright © 2007 SAE International.


Development of a new adaptive ordinal approach to continuous-variable probabilistic optimization

Romero, Vicente J.

A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effects. One simply asks "Is that alternative better or worse than this one?", not "HOW MUCH better or worse is that alternative than this one?" The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. However, in this report we demonstrate correct decision-making in a continuous-variable probabilistic optimization problem despite extreme vagueness in the statistical characterization of the design options. We present a new adaptive ordinal method for probabilistic optimization in which the trade-off between computational expense and vagueness in the uncertainty characterization can be conveniently managed in various phases of the optimization problem to make cost-effective stepping decisions in the design space. Spatial correlation of uncertainty in the continuous-variable design space is exploited to dramatically increase method efficiency. Under many circumstances the method appears to have favorable robustness and cost-scaling properties relative to other probabilistic optimization methods, and it uniquely has mechanisms for quantifying and controlling error likelihood in design-space stepping decisions. The method is asymptotically convergent to the true probabilistic optimum, so it could be useful as a reference standard against which the efficiency and robustness of other methods can be compared, analogous to the role that Monte Carlo simulation plays in uncertainty propagation.
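
A minimal sketch (hypothetical; the objective, batch size, and one-sided t-test are illustrative assumptions, not the report's algorithm) of the adaptive element described above: paired samples are added in small batches until the sign of the mean difference between two design alternatives can be asserted at a requested confidence level, which is one way to quantify and control the likelihood of a wrong stepping decision.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def noisy_objective(x, z):
        # Hypothetical noisy objective, used only for illustration.
        return (x - 2.0) ** 2 + 0.3 * z

    def adaptive_rank(x_a, x_b, confidence=0.95, batch=5, max_samples=200):
        # Add paired samples in small batches until the sign of the mean difference
        # can be asserted at the requested confidence level (one-sided t-test).
        diffs = np.empty(0)
        while diffs.size < max_samples:
            z = rng.standard_normal(batch)
            diffs = np.append(diffs, noisy_objective(x_a, z) - noisy_objective(x_b, z))
            t_stat, p_two_sided = stats.ttest_1samp(diffs, 0.0)
            if p_two_sided / 2.0 < 1.0 - confidence:
                return ("A better" if diffs.mean() < 0.0 else "B better"), diffs.size
        return "undecided", diffs.size

    print(adaptive_rank(1.0, 1.4))  # decision plus the number of samples it took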


Advanced nuclear energy analysis technology

Young, Michael F.; Murata, Kenneth K.; Romero, Vicente J.; Gauntt, Randall O.; Rochau, Gary E.

A two-year effort focused on applying ASCI technology developed for the analysis of weapons systems to the state-of-the-art accident analysis of a nuclear reactor system was proposed. The Sandia SIERRA parallel computing platform for ASCI codes includes high-fidelity thermal, fluids, and structural codes whose coupling through SIERRA can be specifically tailored to the particular problem at hand to analyze complex multiphysics problems. Presently, however, the suite lacks several physics modules unique to the analysis of nuclear reactors. The NRC MELCOR code, not presently part of SIERRA, was developed to analyze severe accidents in present-technology reactor systems. We attempted to: (1) evaluate the SIERRA code suite for its current applicability to the analysis of next generation nuclear reactors, and the feasibility of implementing MELCOR models into the SIERRA suite, (2) examine the possibility of augmenting ASCI codes or alternatives by coupling to the MELCOR code, or portions thereof, to address physics particular to nuclear reactor issues, especially those facing next generation reactor designs, and (3) apply the coupled code set to a demonstration problem involving a nuclear reactor system. We were successful in completing the first two in sufficient detail to determine that an extensive demonstration problem was not feasible at this time. In the future, completion of this research would demonstrate the feasibility of performing high fidelity and rapid analyses of safety and design issues needed to support the development of next generation power reactor systems.


Application of probabilistic ordinal optimization concepts to a continuous-variable probabilistic optimization problem

Romero, Vicente J.

A very general and robust approach to solving optimization problems involving probabilistic uncertainty is through the use of Probabilistic Ordinal Optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the probabilistic merits of local design alternatives, rather than on crisp quantification of the alternatives. Thus, we simply ask the question: 'Is that alternative better or worse than this one?' to some level of statistical confidence we require, not: 'HOW MUCH better or worse is that alternative to this one?'. In this paper we illustrate an elementary application of probabilistic ordinal concepts in a 2-D optimization problem. Two uncertain variables contribute to uncertainty in the response function. We use a simple Coordinate Pattern Search non-gradient-based optimizer to step toward the statistical optimum in the design space. We also discuss more sophisticated implementations, and some of the advantages and disadvantages versus non-ordinal approaches for optimization under uncertainty.
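
A minimal sketch (illustrative only; the 2-D objective, the noise model for the two uncertain variables, and the sample sizes are assumptions) of a Coordinate Pattern Search in which each candidate step is accepted purely on an ordinal, sample-based comparison against the incumbent design:

    import numpy as np

    rng = np.random.default_rng(2)

    def mean_objective(x):
        # Hypothetical 2-D objective to be minimized under uncertainty.
        return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

    def sampled_objective(x, n):
        # Two uncertain variables perturb the response.
        u = rng.normal(0.0, 0.3, size=(n, 2))
        return mean_objective(x) + u[:, 0] * x[0] + u[:, 1]

    def is_better(x_new, x_old, n=30):
        # Ordinal decision: compare sample means only; no precise statistics kept.
        return sampled_objective(x_new, n).mean() < sampled_objective(x_old, n).mean()

    def coordinate_pattern_search(x0, step=0.5, shrink=0.5, tol=1e-2):
        x = np.asarray(x0, dtype=float)
        while step > tol:
            improved = False
            for i in range(x.size):
                for sign in (+1.0, -1.0):
                    trial = x.copy()
                    trial[i] += sign * step
                    if is_better(trial, x):
                        x, improved = trial, True
            if not improved:
                step *= shrink  # no better neighbor found: refine the pattern
        return x

    print(coordinate_pattern_search([3.0, 3.0]))  # should approach roughly (1.0, -0.5)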


Initial evaluation of Centroidal Voronoi Tessellation method for statistical sampling and function integration

Romero, Vicente J.; Gunzburger, Max D.

A recently developed Centroidal Voronoi Tessellation (CVT) unstructured sampling method is investigated here to assess its suitability for use in statistical sampling and function integration. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-Dimensional parameter spaces. It has recently been shown on several 2-D test problems to provide superior point distributions for generating locally conforming response surfaces. In this paper, its performance as a statistical sampling and function integration method is compared to that of Latin-Hypercube Sampling (LHS) and Simple Random Sampling (SRS) Monte Carlo methods, and Halton and Hammersley quasi-Monte-Carlo sequence methods. Specifically, sampling efficiencies are compared for function integration and for resolving various statistics of response in a 2-D test problem. It is found that on balance CVT performs best of all these sampling methods on our test problems.
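
A minimal sketch (an assumption-laden toy, not the authors' CVT implementation) of generating CVT-like points by Lloyd iteration on the unit square, i.e., repeatedly moving generators to the centroids of their Voronoi cells as estimated from a dense random cloud, and then using the points for a simple integration estimate alongside simple random sampling:

    import numpy as np

    rng = np.random.default_rng(3)

    def cvt_points(n_gen=16, dim=2, n_cloud=20000, iters=50):
        # Lloyd iteration: assign a dense random cloud to the nearest generator,
        # then move each generator to the centroid of its cell.
        gens = rng.random((n_gen, dim))
        cloud = rng.random((n_cloud, dim))
        for _ in range(iters):
            d = ((cloud[:, None, :] - gens[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)
            for k in range(n_gen):
                members = cloud[nearest == k]
                if members.size:
                    gens[k] = members.mean(axis=0)
        return gens

    def f(x):
        # Smooth test integrand on the unit square (assumed for illustration).
        return np.exp(-((x - 0.5) ** 2).sum(axis=1))

    pts_cvt = cvt_points()
    pts_srs = rng.random((16, 2))
    print("CVT integration estimate:", f(pts_cvt).mean())
    print("SRS integration estimate:", f(pts_srs).mean())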


Uncertainty analysis of decomposing polyurethane foam

Thermochimica Acta

Hobbs, Michael L.; Romero, Vicente J.

Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time-consuming, CPU-intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. The response variable was chosen as the steady-state decomposition front velocity. Four different analyses are presented, including (1) an analytical mean value (MV) analysis, (2) a linear surrogate response surface (LIN) using a constrained Latin hypercube sampling (LHS) technique, (3) a quadratic surrogate response surface (QUAD) using LHS, and (4) a direct LHS (DLHS) analysis using the full grid and time step resolved finite element model. To minimize the numerical noise, 50 μm elements and approximately 1 ms time steps were required to obtain stable uncertainty results. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model. © 2002 Elsevier Science B.V. All rights reserved.
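
A minimal sketch (with a made-up stand-in response; the model, sample sizes, and coefficients are assumptions) of the LIN-style surrogate idea: draw a small Latin hypercube design, fit a linear response surface by least squares, and compare its propagated mean and standard deviation against direct sampling of the full model:

    import numpy as np

    rng = np.random.default_rng(4)

    def full_model(x):
        # Stand-in for the expensive finite element decomposition model:
        # a front-velocity-like response, mildly nonlinear in a few inputs.
        return 1.0 + 0.8 * x[:, 0] - 0.3 * x[:, 1] + 0.1 * x[:, 0] * x[:, 2]

    def latin_hypercube(n, dim):
        # One stratified sample per bin in each dimension, randomly paired.
        u = (rng.random((n, dim)) + np.arange(n)[:, None]) / n
        for j in range(dim):
            rng.shuffle(u[:, j])
        return u

    dim, n_train = 3, 20
    x_train = latin_hypercube(n_train, dim)
    y_train = full_model(x_train)

    # Linear surrogate (LIN): least-squares fit of y ~ 1 + x
    A = np.hstack([np.ones((n_train, 1)), x_train])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

    x_big = rng.random((100000, dim))  # cheap heavy sampling of the surrogate
    y_surrogate = np.hstack([np.ones((100000, 1)), x_big]) @ coef
    print("surrogate mean/std:", y_surrogate.mean(), y_surrogate.std())
    print("direct    mean/std:", full_model(x_big).mean(), full_model(x_big).std())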


Description of the Sandia Validation Metrics Project

Trucano, Timothy G.; Easterling, Robert G.; Dowding, Kevin J.; Paez, Thomas L.; Urbina, Angel U.; Romero, Vicente J.; Rutherford, Brian; Hills, Richard G.

This report describes the underlying principles and goals of the Sandia ASCI Verification and Validation Program Validation Metrics Project. It also gives a technical description of two case studies, one in structural dynamics and the other in thermomechanics, that serve to focus the technical work of the project in Fiscal Year 2001.


Effect of initial seed and number of samples on simple-random and Latin-Hypercube Monte Carlo probabilities (confidence interval considerations)

Romero, Vicente J.

In order to devise an algorithm for autonomously terminating Monte Carlo sampling when sufficiently small and reliable confidence intervals (CI) are achieved on calculated probabilities, the behavior of CI estimators must be characterized. This knowledge is also required in comparing the accuracy of other probability estimation techniques to Monte Carlo results. Based on 100 trials in a hypothesis test, estimated 95% CI from classical approximate CI theory are empirically examined to determine if they behave as true 95% CI over spectrums of probabilities (population proportions) ranging from 0.001 to 0.99 in a test problem. Tests are conducted for population sizes of 500 and 10,000 samples where applicable. Significant differences between true and estimated 95% CI are found to occur at probabilities between 0.1 and 0.9, such that estimated 95% CI can be rejected as not being true 95% CI at less than a 40% chance of incorrect rejection. With regard to Latin Hypercube sampling (LHS), though no general theory has been verified for accurately estimating LHS CI, recent numerical experiments on the test problem have found LHS to be conservatively over an order of magnitude more efficient than SRS for similar sized CI on probabilities ranging between 0.25 and 0.75. The efficiency advantage of LHS vanishes, however, as the probability extremes of 0 and 1 are approached.
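
A minimal sketch (illustrative; the probabilities, sample sizes, and trial count are assumptions, and the Wald form is used as one common classical approximation) of the kind of check described above: compute an approximate 95% confidence interval p_hat ± 1.96*sqrt(p_hat*(1 - p_hat)/N) for a Monte Carlo probability estimate, then empirically examine how often it actually contains the true probability over repeated trials:

    import numpy as np

    rng = np.random.default_rng(5)

    def wald_ci(p_hat, n, z=1.96):
        # Classical normal-approximation ("Wald") 95% CI for a proportion.
        half = z * np.sqrt(p_hat * (1.0 - p_hat) / n)
        return p_hat - half, p_hat + half

    def empirical_coverage(p_true, n, trials=1000):
        # Fraction of trials in which the estimated CI contains the true probability.
        hits = 0
        for _ in range(trials):
            p_hat = rng.binomial(n, p_true) / n
            lo, hi = wald_ci(p_hat, n)
            hits += (lo <= p_true <= hi)
        return hits / trials

    for p in (0.001, 0.01, 0.1, 0.5, 0.9):
        print(p, empirical_coverage(p, n=500))  # compare empirical coverage with the nominal 0.95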


Application of finite element, global polynomial, and kriging response surfaces in Progressive Lattice Sampling designs

Romero, Vicente J.; Swiler, Laura P.; Giunta, Anthony A.

This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
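
A minimal sketch (illustrative; the test function, sample increments, and quadratic basis are assumptions, and only the polynomial-regression option is shown; the increments here are random placeholders rather than an actual PLS lattice) of the evaluation loop described: add samples incrementally, refit the response surface, and track the approximation error against the analytic test function at dense check points:

    import numpy as np

    rng = np.random.default_rng(6)

    def test_function(x):
        # Analytic test function (assumed for illustration).
        return np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2

    def poly_features(x):
        # Full quadratic basis in 2-D: 1, x1, x2, x1^2, x2^2, x1*x2
        return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                                x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

    x_check = rng.random((2000, 2))   # dense check points for measuring error
    y_check = test_function(x_check)

    samples = rng.random((6, 2))      # stand-in for an initial design
    for level in range(4):
        y = test_function(samples)
        coef, *_ = np.linalg.lstsq(poly_features(samples), y, rcond=None)
        rms = np.sqrt(((poly_features(x_check) @ coef - y_check) ** 2).mean())
        print(f"level {level}: {len(samples)} samples, RMS error {rms:.4f}")
        samples = np.vstack([samples, rng.random((6, 2))])  # placeholder for the next increment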


Efficient Global Optimization Under Conditions of Noise and Uncertainty - A Multi-Model Multi-Grid Windowing Approach

Romero, Vicente J.

Incomplete convergence in numerical simulation such as computational physics simulations and/or Monte Carlo simulations can enter into the calculation of the objective function in an optimization problem, producing noise, bias, and topographical inaccuracy in the objective function. These affect accuracy and convergence rate in the optimization problem. This paper is concerned with global searching of a diverse parameter space, graduating to accelerated local convergence to a (hopefully) global optimum, in a framework that acknowledges convergence uncertainty and manages model resolution to efficiently reduce uncertainty in the final optimum. In its own right, the global-to-local optimization engine employed here (devised for noise tolerance) performs better than other classical and contemporary optimization approaches tried individually and in combination on the "industrial" test problem to be presented.


Optimization and nondeterministic analysis with large simulation models: Issues and directions

Romero, Vicente J.

Economic and political demands are driving computational investigation of systems and processes like never before. It is foreseen that questions of safety, optimality, risk, robustness, likelihood, credibility, etc. will increasingly be posed to computational modelers. This will require the development and routine use of computing infrastructure that incorporates computational physics models within the framework of larger meta-analyses involving aspects of optimization, nondeterministic analysis, and probabilistic risk assessment. This paper describes elements of an ongoing case study involving the computational solution of several meta-problems in optimization, nondeterministic analysis, and optimization under uncertainty pertaining to the surety of a generic weapon safing device. The goal of the analyses is to determine the worst-case heating configuration in a fire that most severely threatens the integrity of the device. A large, 3-D, nonlinear, finite element thermal model is used to determine the transient thermal response of the device in this coupled conduction/radiation problem. Implications of some of the numerical aspects of the thermal model on the selection of suitable and efficient optimization and nondeterministic analysis algorithms are discussed.


Efficient Monte Carlo probability estimation with finite element response surfaces built from progressive lattice sampling

Romero, Vicente J.

The concept of "progressive Lattice Sampling" as a basis for generating successive finite element response surfaces that are increasingly effective in matching actual response functions is investigated here. The goal is optimal response surface generation, which achieves an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. Such is important in global optimization and in estimation of system probabilistic response, which are both made much more viable by replacing large complex computer models of system behavior with fast-running accurate approximations. This paper outlines the methodology for Finite Element/Lattice Sampling (FE/LS) response surface generation and examines the effectiveness of progressively refined FE/LS response surfaces in decoupled Monte Carlo analysis of several model problems. The proposed method is in all cases more efficient (generally orders of magnitude more efficient) than direct Monte Carlo evaluation, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling. Furthermore, the marginal efficiency of the FE/LS decoupled Monte Carlo approach increases as the size of the computer model increases, which is a very favorable property.
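
A minimal sketch (with a fake, inexpensive stand-in for the computer model; the surrogate here is simple quadratic regression rather than the paper's finite element interpolation, and all numbers are assumptions) of decoupled Monte Carlo: spend a few expensive runs building a response surface, then estimate a probability by sampling the surrogate heavily instead of the model:

    import numpy as np

    rng = np.random.default_rng(7)

    def expensive_model(x):
        # Stand-in for a large computer model; imagine each call takes hours.
        return np.sin(2.0 * x[:, 0]) + 0.5 * x[:, 1]

    def fit_surface(x, y):
        # Simple quadratic regression surrogate built from the training runs.
        A = np.column_stack([np.ones(len(x)), x, x ** 2, x[:, :1] * x[:, 1:]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return lambda xq: np.column_stack(
            [np.ones(len(xq)), xq, xq ** 2, xq[:, :1] * xq[:, 1:]]) @ coef

    x_train = rng.random((25, 2))                  # a few "expensive" runs
    surface = fit_surface(x_train, expensive_model(x_train))

    x_mc = rng.random((200000, 2))                 # cheap Monte Carlo on the surrogate
    threshold = 1.0
    p_surrogate = (surface(x_mc) > threshold).mean()
    p_direct = (expensive_model(x_mc) > threshold).mean()  # only feasible here because the model is fake
    print("decoupled estimate:", p_surrogate, " direct estimate:", p_direct)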


Finite-element/progressive-lattice-sampling response surface methodology and application to benchmark probability quantification problems

Romero, Vicente J.

Optimal response surface construction is being investigated as part of Sandia discretionary (LDRD) research into Analytic Nondeterministic Methods. The goal is to achieve an adequate representation of system behavior over the relevant parameter space of a problem with a minimum of computational and user effort. This is important in global optimization and in estimation of system probabilistic response, which are both made more viable by replacing large complex computer models with fast-running accurate and noiseless approximations. A Finite Element/Lattice Sampling (FE/LS) methodology for constructing progressively refined finite element response surfaces that reuse previous generations of samples is described here. Similar finite element implementations can be extended to N-dimensional problems and/or random fields and applied to other types of structured sampling paradigms, such as classical experimental design and Gauss, Lobatto, and Patterson sampling. Here the FE/LS model is applied in a "decoupled" Monte Carlo analysis of two sets of probability quantification test problems. The analytic test problems, spanning a large range of probabilities and very demanding failure region geometries, constitute a good testbed for comparing the performance of various nondeterministic analysis methods. In results here, FE/LS decoupled Monte Carlo analysis required orders of magnitude less computer time than direct Monte Carlo analysis, with no appreciable loss of accuracy. Thus, when arriving at probabilities or distributions by Monte Carlo, it appears to be more efficient to expend computer-model function evaluations on building a FE/LS response surface than to expend them in direct Monte Carlo sampling.


Making use of optimization, nondeterministic analysis, and numerical simulation to assess firing set robustness in a fire

Romero, Vicente J.

One emphasis of weapon surety (safety and security) at Sandia National Laboratories is the assessment of fire-related risk to weapon systems. New developments in computing hardware and software make possible the application of a new generation of very powerful analysis tools for surety assessment. This paper illustrates the application of some of these computational tools to assess the robustness of a conceptual firing set design in severe thermal environments. With these assessment tools, systematic interrogation of the parameter space governing the thermal robustness of the firing set has revealed much greater vulnerability than traditional ad hoc techniques had indicated. These newer techniques should be routinely applied in weapon design and assessment to produce more fully characterized and robust systems where weapon surety is paramount. As well as helping expose and quantify vulnerabilities in systems, these tools can be used in design and resource allocation processes to build safer, more reliable, more optimal systems.


A numerical model of 2-D sloshing of pseudo-viscous liquids in horizontally accelerated rectangular containers

Romero, Vicente J.

A numerical model for simulating the transient nonlinear behavior of 2-D viscous sloshing flows in rectangular containers subjected to arbitrary horizontal accelerations is presented. The potential-flow formulation uses Rayleigh damping to approximate the effects of viscosity, and Lagrangian node movement is used to accommodate violent sloshing motions. A boundary element approach is used to efficiently handle the time-changing fluid geometry. Additionally, a corrected equation is presented for the constraint condition relating normal and tangential derivatives of the velocity potential where the fluid free surface meets the rigid container wall. The numerical model appears to be more accurate than previous sloshing models, as determined by comparison against exact analytic solutions and results of previously published models.


Application of optimization to the inverse problem of finding the worst-case heating configuration in a fire

Romero, Vicente J.

Thermal optimization procedures have been applied to determine the worst-case heating boundary conditions that a safety device can be credibly subjected to. There are many interesting aspects of this work in the areas of thermal transport, optimization, discrete modeling, and computing. The forward problem involves transient simulations with a nonlinear 3-D finite element model solving a coupled conduction/radiation problem. Coupling to the optimizer requires that boundary conditions in the thermal model be parameterized in terms of the optimization variables. The optimization is carried out over a diverse multi-dimensional parameter space where the forward evaluations are computationally expensive and of unknown duration a priori. The optimization problem is complicated by numerical artifacts resulting from discrete approximation and finite computer precision, as well as theoretical difficulties associated with navigating to a global minimum on a nonconvex objective function having a fold and several local minima. In this paper we report on the solution of the optimization problem, discuss implications of some of the features of this problem on selection of a suitable and efficient optimization algorithm, and share lessons learned, fixes implemented, and research issues identified along the way.


CIRCE2/DEKGEN2: A software package for facilitated optical analysis of 3-D distributed solar energy concentrators. Theory and user manual

Romero, Vicente J.

CIRCE2 is a computer code for modeling the optical performance of three-dimensional dish-type solar energy concentrators. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator. Given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun), and concentrator imperfections such as surface roughness and random deviation in slope, the code predicts the flux distribution and total power incident upon the target. Great freedom exists in the variety of concentrator and receiver configurations that can be modeled. Additionally, provisions for shading and receiver aperturing are included. DEKGEN2 is a preprocessor designed to facilitate input of geometry, error distributions, and sun models. This manual describes the optical model, user inputs, code outputs, and operation of the software package. A user tutorial is included in which several collectors are built and analyzed in step-by-step examples.


CIRCE2/DEKGEN2. A software package for facilitated optical analysis of 3-D distributed solar energy concentrators


Romero, Vicente J.

CIRCE2 is a cone-optics computer code for determining the flux distribution and total incident power upon a receiver, given concentrator and receiver geometries, sunshape (angular distribution of incident rays from the sun-disk), and concentrator imperfections such as surface roughness and random deviation in slope. Statistical methods are used to evaluate the directional distribution of reflected rays from any given point on the concentrator, whence the contribution to any point on the target can be obtained. DEKGEN2 is an interactive preprocessor which facilitates specification of geometry, sun models, and error distributions. The CIRCE2/DEKGEN2 package equips solar energy engineers with a quick, user-friendly design and analysis tool for study/optimization of dish-type distributed receiver systems. The package exhibits convenient features for analysis of 'conventional' concentrators, and has the generality required to investigate complex and unconventional designs. Among the more advanced features are the ability to model dish or faceted concentrators and stretched-membrane reflectors, and to analyze 3-D flux distributions on internal or external receivers with 3-D geometries. Facets of rectangular, triangular, or circular projected shape, with profiles of parabolic, spherical, flat, or custom curvature can be handled. Provisions for shading, blocking, and aperture specification are also included. This paper outlines the features and capabilities of the new package, as well as the theory and numerical models employed in CIRCE2.
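
A minimal 2-D toy (all geometry, the uniform sunshape, and the error magnitude are assumptions, and this is far simpler than CIRCE2's cone-optics model) of the statistical treatment described: perturb the mirror normal with a random slope error, draw incident rays from a finite sunshape, reflect specularly, and tally where the rays land on a target plane:

    import numpy as np

    rng = np.random.default_rng(8)

    n_rays = 100000
    slope_error = 2.5e-3       # rad, RMS surface slope error (assumed value)
    sun_half_angle = 4.65e-3   # rad, approximate solar half-angle; uniform sunshape assumed

    # 2-D toy: a flat mirror element at the origin with nominal normal along +y,
    # and a target plane at y = 10 m.
    normal_angle = np.pi / 2 + rng.normal(0.0, slope_error, n_rays)               # perturbed normals
    incident_angle = -np.pi / 2 + rng.uniform(-sun_half_angle, sun_half_angle, n_rays)

    # Specular reflection in 2-D: reflected angle = 2*normal - incident - pi
    reflected_angle = 2.0 * normal_angle - incident_angle - np.pi

    target_y = 10.0
    hit_x = target_y / np.tan(reflected_angle)   # where each reflected ray crosses the target plane

    flux, edges = np.histogram(hit_x, bins=50, range=(-0.5, 0.5))
    print("peak bin count:", flux.max(), "of", n_rays, "rays")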
