Publications

Results 26–50 of 144

Bootstrapping and jackknife resampling to improve sparse-sample UQ methods for tail probability estimation

ASME 2019 Verification and Validation Symposium, VVS 2019

Jekel, Charles F.; Romero, Vicente J.

Tolerance Interval Equivalent Normal (TI-EN) and Superdistribution (SD) sparse-sample uncertainty quantification (UQ) methods are used for conservative estimation of small tail probabilities. These methods estimate the probability of a response lying beyond a specified threshold when only limited data are available. The study focused on sparse-sample regimes ranging from N = 2 to 20 samples, reflective of most experimental and some expensive computational situations. A tail probability magnitude of 10⁻⁴ was examined on four different distribution shapes, in order to be relevant for quantification of margins and uncertainty (QMU) problems that arise in risk and reliability analyses. In most cases the UQ methods were found to have optimal performance with a small number of samples, beyond which performance deteriorated as samples were added. Using this observation, a generalized Jackknife resampling technique was developed to average estimates from many smaller subsamples. This improved the performance of the SD and TI-EN methods, particularly when more than the optimal number of samples was available. A Complete Jackknifing technique, which considered all possible subsample combinations, was shown to perform better in most cases than an alternative Bootstrap resampling technique.
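
As a hedged illustration of the Complete Jackknifing idea described above (averaging an estimator over all possible size-k subsamples), consider the following Python sketch. The tail_prob_estimate stand-in is a simple normal-theory plug-in, not the paper's TI-EN or SD estimators, and all names are illustrative.

```python
import itertools

import numpy as np
from scipy.stats import norm

def tail_prob_estimate(samples, threshold):
    # Stand-in estimator of P(X > threshold): a normal-theory plug-in,
    # used purely for illustration (not the paper's TI-EN/SD methods).
    mu = np.mean(samples)
    sigma = np.std(samples, ddof=1)
    return norm.sf(threshold, loc=mu, scale=sigma)

def complete_jackknife_estimate(samples, threshold, k):
    # Average the estimator over ALL size-k subsamples of the data,
    # per the abstract's description of Complete Jackknifing.
    samples = np.asarray(samples, dtype=float)
    estimates = [
        tail_prob_estimate(samples[list(idx)], threshold)
        for idx in itertools.combinations(range(len(samples)), k)
    ]
    return float(np.mean(estimates))
```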

Real-space model validation and predictor-corrector extrapolation applied to the Sandia cantilever beam end-to-end UQ problem

AIAA Scitech 2019 Forum

Romero, Vicente J.

This paper describes and demonstrates the Real Space (RS) model validation approach and the Predictor-Corrector (PC) approach to extrapolative prediction given model bias information from RS validation assessments against experimental data. The RS validation method quantifies model prediction bias of selected output scalar quantities of engineering interest (QOIs) in terms of directional bias error and any uncertainty thereof. Information in this form facilitates potential bias correction of predicted QOIs. The PC extrapolation approach maps a QOI-specific bias correction and related uncertainty into perturbation of one or more model parameters selected for most robust extrapolation of that QOI’s bias correction to prediction conditions away from the validation conditions. Such corrections are QOI-dependent and not legitimate corrections or fixes to the physics model itself, so extrapolation of the bias correction to the prediction conditions is not expected to be perfect. Therefore, PC extrapolation employs both the perturbed and unperturbed models to estimate upper and lower bounds to the QOI correction that are scaled with extrapolation distance as measured by the magnitude of change of the predicted QOI. An optional factor of safety on the uncertainty estimate for the predicted QOI also scales with the extrapolation. The RS-PC methodology is illustrated on a cantilever beam end-to-end uncertainty quantification (UQ) problem. Complementary “Discrete-Direct” model calibration and simple and effective sparse-data UQ methods feed into the RS and PC methods and round out a pragmatic and versatile systems approach to end-to-end UQ.

Simple effective conservative treatment of uncertainty from sparse samples of random functions

ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering

Romero, Vicente J.; Schroeder, Benjamin B.; Dempsey, James F.; Lewis, John R.; Breivik, Nicole L.; Orient, George E.; Antoun, Bonnie R.; Winokur, Justin W.; Glickman, Matthew R.; Red-Horse, John R.

This paper examines the variability of predicted responses when multiple stress-strain curves (reflecting variability from replicate material tests) are propagated through a finite element model of a ductile steel can being slowly crushed. Over 140 response quantities of interest (including displacements, stresses, strains, and calculated measures of material damage) are tracked in the simulations. Each response quantity’s behavior varies according to the particular stress-strain curves used for the materials in the model. We desire to estimate response variability when only a few stress-strain curve samples are available from material testing. Propagation of just a few samples will usually result in significantly underestimated response uncertainty relative to propagation of a much larger population that adequately samples the presiding random-function source. A simple classical statistical method, Tolerance Intervals, is tested for effectively treating sparse stress-strain curve data. The method is found to perform well on the highly nonlinear input-to-output response mappings and non-standard response distributions in the can-crush problem. The results and discussion in this paper support a proposition that the method will apply similarly well for other sparsely sampled random variable or function data, whether from experiments or models. Finally, the simple Tolerance Interval method is also demonstrated to be very economical.
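
For readers unfamiliar with the classical Tolerance Interval method tested in the paper, the sketch below shows a standard two-sided normal tolerance interval using Howe's k-factor approximation. This is a generic textbook computation assumed here for illustration; the paper applies such intervals to each of the 140+ tracked response quantities, which this sketch does not attempt to reproduce.

```python
import numpy as np
from scipy.stats import chi2, norm

def normal_tolerance_interval(samples, coverage=0.95, confidence=0.95):
    # Two-sided normal tolerance interval (Howe's approximation):
    # the bounds are intended to contain `coverage` of the population
    # with probability `confidence`, even for sparse samples.
    x = np.asarray(samples, dtype=float)
    n = x.size
    nu = n - 1
    z = norm.ppf((1.0 + coverage) / 2.0)
    chi2_lo = chi2.ppf(1.0 - confidence, nu)  # lower chi-square quantile
    k = z * np.sqrt(nu * (1.0 + 1.0 / n) / chi2_lo)
    mean, s = x.mean(), x.std(ddof=1)
    return mean - k * s, mean + k * s
```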

Discrete-Direct Model Calibration and Propagation Approach Addressing Sparse Replicate Tests and Material, Geometric, and Measurement Uncertainties

SAE Technical Papers

Romero, Vicente J.

This paper introduces the "Discrete-Direct" (DD) model calibration and uncertainty propagation approach for computational models calibrated to data from sparse replicate tests of stochastically varying systems. The DD approach generates and propagates various discrete realizations of possible calibration parameter values corresponding to possible realizations of the uncertain inputs and outputs of the experiments. This is in contrast to model calibration methods that attempt to assign or infer continuous probability density functions for the calibration parameters, which adds unjustified information to the calibration and propagation problem. The DD approach straightforwardly accommodates aleatory variabilities and epistemic uncertainties in system properties and behaviors, in input initial and boundary conditions, and in measurement uncertainties in the experiments. The approach appears to have several advantages over Bayesian and other calibration approaches for capturing and utilizing the information obtained from the typically small number of experiments in model calibration situations. In particular, the DD methodology better preserves the fundamental information from the experimental data in a way that enables model predictions to be more directly traced back to the supporting experimental data. The approach is also presently more viable for calibration involving sparse realizations of random function data (e.g., stress-strain curves) and random field data. The DD methodology is conceptually simpler than Bayesian calibration approaches, and is straightforward to implement. The methodology is demonstrated and analyzed in this paper on several illustrative calibration and uncertainty propagation problems.
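
A minimal one-parameter sketch of the core DD idea, as paraphrased from this abstract: each experimental replicate output is matched exactly by root-finding on a calibration parameter, producing a discrete set of parameter realizations rather than a fitted probability density. The toy model, data values, and bracket below are hypothetical, and the full DD treatment of aleatory/epistemic and measurement uncertainties is omitted.

```python
import numpy as np
from scipy.optimize import brentq

def discrete_direct_calibrate(model, observed_outputs, bracket):
    # One calibrated parameter realization per experimental replicate:
    # each observed output is reproduced exactly by root-finding,
    # yielding discrete parameter values rather than a fitted PDF.
    return np.array([
        brentq(lambda p, y=y: model(p) - y, *bracket)
        for y in observed_outputs
    ])

# Hypothetical toy model and three replicate measurements.
model = lambda p: 2.0 * p + 1.0
realizations = discrete_direct_calibrate(model, [3.1, 2.9, 3.4],
                                          bracket=(0.0, 10.0))
# Propagation: evaluate the model (or a downstream QOI) at each realization.
predictions = model(realizations)
```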

A class of simple and effective UQ methods for sparse replicate data applied to the cantilever beam end-to-end UQ problem

AIAA Non-Deterministic Approaches Conference, 2018

Romero, Vicente J.; Weirs, Vincent G.

When very few samples of a random quantity are available from a source distribution or probability density function (PDF) of unknown shape, it is usually not possible to accurately infer the PDF from which the data samples come. A significant component of epistemic uncertainty then exists concerning the source distribution of random or aleatory variability. For many engineering purposes, including design and risk analysis, one would normally want to avoid inference-related underestimation of important quantities such as response variance and failure probabilities. Recent research has established the practicality and effectiveness of a class of simple and inexpensive UQ methods for reasonably conservative estimation of such quantities when only sparse samples of a random quantity are available. This class of UQ methods is explained, demonstrated, and analyzed in this paper within the context of the Sandia Cantilever Beam End-to-End UQ Problem, Part A.1. Several sets of sparse replicate data are involved and several representative uncertainty quantities are to be estimated: A) beam deflection variability, in particular the 2.5 to 97.5 percentile “central 95%” range of the sparsely sampled PDF of deflection; and B) a small exceedance probability associated with a tail of the PDF integrated beyond a specified deflection tolerance.
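
To make the two estimation targets concrete, the sketch below computes naive plug-in estimates of (A) the central 95% range and (B) the exceedance probability from sparse samples. With few samples these plug-ins typically underestimate the true values, which is precisely the motivation for the conservative UQ methods the paper evaluates; names are illustrative.

```python
import numpy as np

def beam_uq_plug_in(samples, deflection_tolerance):
    # (A) Plug-in estimate of the central 95% (2.5-97.5 percentile) range.
    # (B) Plug-in estimate of P(deflection > tolerance).
    # Both tend to underestimate with sparse data, motivating the
    # conservative methods described in the paper.
    x = np.asarray(samples, dtype=float)
    lo, hi = np.percentile(x, [2.5, 97.5])
    p_exceed = float(np.mean(x > deflection_tolerance))
    return (lo, hi), p_exceed
```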

Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

Romero, Vicente J.; Bonney, Matthew; Schroeder, Benjamin B.; Weirs, Vincent G.

When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Underestimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid underestimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and the 10⁻⁴ probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
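
The report's trial-based performance characterization can be sketched as follows: draw sparse samples from a known distribution, apply a candidate bounding method, and record how often it actually bounds the known true value. The callables below are illustrative placeholders, not the report's actual methods.

```python
import numpy as np

def bounding_reliability(estimator, sampler, true_value, n, trials=10_000,
                         seed=None):
    # Fraction of trials in which a sparse-sample estimate conservatively
    # bounds the known true value (echoes the report's 10,000-trial studies).
    rng = np.random.default_rng(seed)
    hits = sum(
        estimator(sampler(rng, n)) >= true_value for _ in range(trials)
    )
    return hits / trials

# Illustrative use: how often does the sample maximum bound the true
# 97.5th percentile of a standard normal when only n = 5 samples exist?
reliability = bounding_reliability(
    estimator=np.max,
    sampler=lambda rng, n: rng.normal(size=n),
    true_value=1.96,
    n=5,
)
```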

Validation Assessment of a Glass-to-Metal Seal Finite-Element Model

Jamison, Ryan D.; Buchheit, Thomas E.; Emery, John M.; Romero, Vicente J.; Stavig, Mark E.; Newton, Clay S.; Brown, Arthur B.

Sealing glasses are ubiquitous in high-pressure and high-temperature engineering applications, such as hermetic feed-through electrical connectors. A common connector technology is the glass-to-metal seal, in which a metal shell compresses a sealing glass to create a hermetic seal. Though finite-element analysis has been used to understand and design glass-to-metal seals for many years, there has been little validation of these models. An indentation technique was employed to measure the residual stress on the surface of a simple glass-to-metal seal. Recently developed rate-dependent material models of both Schott 8061 and 304L VAR stainless steel have been applied to a finite-element model of the simple glass-to-metal seal. Model predictions of residual stress based on the evolution of material models are shown and compared to measured data. Validity of the finite-element predictions is discussed. It is shown that the finite-element model of the glass-to-metal seal accurately predicts the mean residual stress in the glass near the glass-to-metal interface and is valid for this quantity of interest.

Development of Uncertainty and Material Property "Truth" Models and Results for Cantilever Beam End-to-End UQ Problem

Schroeder, Benjamin B.; Romero, Vicente J.

Construction of a test problem that quantitatively tests the effectiveness and robustness of the many features and capabilities that a comprehensive end-to-end (E2E) UQ framework should have is very challenging. Accordingly, this report illustrates many of the considerations and numerical investigations that went into the construction of the Sandia Cantilever Beam End-to-End UQ test problem.

Applicability Analysis of Validation Evidence for Biomedical Computational Models

Journal of Verification, Validation and Uncertainty Quantification

Pathmanathan, Pras; Gray, Richard A.; Romero, Vicente J.; Morrison, Tina M.

Computational modeling has the potential to revolutionize medicine the way it transformed engineering. However, despite decades of work, there has only been limited progress to successfully translate modeling research to patient care. One major difficulty which often occurs with biomedical computational models is an inability to perform validation in a setting that closely resembles how the model will be used. For example, for a biomedical model that makes in vivo clinically relevant predictions, direct validation of predictions may be impossible for ethical, technological, or financial reasons. Unavoidable limitations inherent to the validation process lead to challenges in evaluating the credibility of biomedical model predictions. Therefore, when evaluating biomedical models, it is critical to rigorously assess applicability, that is, the relevance of the computational model, and its validation evidence to the proposed context of use (COU). However, there are no well-established methods for assessing applicability. Here, we present a novel framework for performing applicability analysis and demonstrate its use with a medical device computational model. The framework provides a systematic, step-by-step method for breaking down the broad question of applicability into a series of focused questions, which may be addressed using supporting evidence and subject matter expertise. The framework can be used for model justification, model assessment, and validation planning. While motivated by biomedical models, it is relevant to a broad range of disciplines and underlying physics. The proposed applicability framework could help overcome some of the barriers inherent to validation of, and aid clinical implementation of, biomedical models.

Analyst-to-Analyst Variability in Simulation-Based Prediction

Glickman, Matthew R.; Romero, Vicente J.

This report describes findings from the culminating experiment of the LDRD project entitled, "Analyst-to-Analyst Variability in Simulation-Based Prediction". For this experiment, volunteer participants solving a given test problem in engineering and statistics were interviewed at different points in their solution process. These interviews are used to trace differing solutions to differing solution processes, and differing processes to differences in reasoning, assumptions, and judgments. The issue that the experiment was designed to illuminate -- our paucity of understanding of the ways in which humans themselves have an impact on predictions derived from complex computational simulations -- is a challenging and open one. Although solution of the test problem by analyst participants in this experiment has taken much more time than originally anticipated, and is continuing past the end of this LDRD, this project has provided a rare opportunity to explore analyst-to-analyst variability in significant depth, from which we derive evidence-based insights to guide further explorations in this important area.
