Publications


What can simulation test beds teach us about social science? Results of the ground truth program

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Krofcheck, Daniel J.; Warrender, Christina E.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Emery, Ben; Murdock, Jaimie; Bernard, Michael L.; Romero, Vicente J.

The ground truth program used simulations as test beds for social science research methods. The simulations had known ground truth and were capable of producing large amounts of data. This allowed research teams to run experiments and ask questions of these simulations much as social scientists study real-world systems, and enabled robust evaluation of their causal inference, prediction, and prescription capabilities. We tested three hypotheses about research effectiveness using data from the ground truth program, specifically looking at the influence of complexity, causal understanding, and data collection on performance. We found some evidence that system complexity and causal understanding influenced research performance, but no evidence that data availability contributed. The ground truth program may be the first robust coupling of simulation test beds with an experimental framework capable of teasing out factors that determine the success of social science research.


Feedback density and causal complexity of simulation model structure

Journal of Simulation

Naugle, Asmeret B.; Verzi, Stephen J.; Lakkaraju, Kiran L.; Swiler, Laura P.; Warrender, Christina E.; Bernard, Michael L.; Romero, Vicente J.

Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model’s causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model’s causal structure diagram, which characterises the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model’s causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model’s fundamental assumptions and design.
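The article defines these metrics precisely; as a rough illustration of the graph quantities involved (assumed simplifications for a directed causal structure graph, not the paper's exact definitions), the circuit rank E − N + P and a cycle-based feedback fraction can be computed as:

```python
def circuit_rank(nodes, edges):
    # Cyclomatic (circuit) rank E - N + P of the underlying undirected
    # graph, where P is the number of connected components, found here
    # with a small union-find.
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    p = len({find(n) for n in nodes})
    return len(edges) - len(nodes) + p

def feedback_density(nodes, edges):
    # Fraction of edges lying on at least one directed cycle: edge u -> v
    # is on a cycle iff u is reachable from v.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    def reachable(src, dst):
        stack, seen = [src], set()
        while stack:
            x = stack.pop()
            if x == dst:
                return True
            if x not in seen:
                seen.add(x)
                stack.extend(adj.get(x, ()))
        return False
    on_cycle = sum(1 for u, v in edges if reachable(v, u))
    return on_cycle / len(edges) if edges else 0.0
```

On a four-node graph with one three-edge feedback loop plus one dangling edge, the circuit rank is 1 and three of the four edges participate in feedback.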


GDSA Framework Development and Process Model Integration FY2022

Mariner, Paul M.; Debusschere, Bert D.; Fukuyama, David E.; Harvey, Jacob H.; LaForce, Tara; Leone, Rosemary C.; Perry, Frank V.; Swiler, Laura P.; Taconi, Anna M.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (Sassani et al. 2021). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media. This report describes fiscal year (FY) 2022 advances of the Geologic Disposal Safety Assessment (GDSA) performance assessment (PA) development groups of the SFWST Campaign. The common mission of these groups is to develop a geologic disposal system modeling capability for nuclear waste that can be used to assess probabilistically the performance of generic disposal options and generic sites. The modeling capability under development is called GDSA Framework (pa.sandia.gov). GDSA Framework is a coordinated set of codes and databases designed for probabilistically simulating the release and transport of disposed radionuclides from a repository to the biosphere for post-closure performance assessment. Primary components of GDSA Framework include PFLOTRAN to simulate the major features, events, and processes (FEPs) over time, Dakota to propagate uncertainty and analyze sensitivities, meshing codes to define the domain, and various other software for rendering properties, processing data, and visualizing results.


Sensitivity analysis of generic deep geologic repository with focus on spatial heterogeneity induced by stochastic fracture network generation

Advances in Water Resources

Brooks, Dusty M.; Swiler, Laura P.; Stein, Emily S.; Mariner, Paul M.; Basurto, Eduardo B.; Portone, Teresa P.; Eckert, Aubrey C.; Leone, Rosemary C.

Geologic Disposal Safety Assessment Framework is a state-of-the-art simulation software toolkit for probabilistic post-closure performance assessment of systems for deep geologic disposal of nuclear waste developed by the United States Department of Energy. This paper presents a generic reference case and shows how it is being used to develop and demonstrate performance assessment methods within the Geologic Disposal Safety Assessment Framework that mitigate some of the challenges posed by high uncertainty and limited computational resources. Variance-based global sensitivity analysis is applied to assess the effects of spatial heterogeneity using graph-based summary measures for scalar and time-varying quantities of interest. Behavior of the system with respect to spatial heterogeneity is further investigated using ratios of water fluxes. This analysis shows that spatial heterogeneity is a dominant uncertainty in predictions of repository performance which can be identified in global sensitivity analysis using proxy variables derived from graph descriptions of discrete fracture networks. New quantities of interest defined using water fluxes proved useful for better understanding overall system behavior.
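Variance-based global sensitivity analysis apportions output variance among uncertain inputs. A minimal pick-freeze estimator of the first-order Sobol' index (an illustrative sketch assuming independent uniform inputs, not the GDSA Framework's implementation, which relies on Dakota) can be written as:

```python
import random

def sobol_first_order(f, dim, i, n=50_000, seed=0):
    # Pick-freeze estimator: S_i = (E[Y_A * Y_Ci] - E[Y]^2) / Var(Y),
    # where Y_Ci reuses input i from sample matrix A and the remaining
    # inputs from an independent matrix B.
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(x) for x in A]
    yC = [f([a[j] if j == i else b[j] for j in range(dim)])
          for a, b in zip(A, B)]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mu * mu
    return cov / var
```

For the additive toy model Y = X1 + 2*X2 the exact indices are 0.2 and 0.8, which the estimator recovers to sampling accuracy.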


Accelerating Multiscale Materials Modeling with Machine Learning

Modine, N.A.; Stephens, John A.; Swiler, Laura P.; Thompson, Aidan P.; Vogel, Dayton J.; Cangi, Attila C.; Feilder, Lenz F.; Rajamanickam, Sivasankaran R.

The focus of this project is to accelerate and transform the workflow of multiscale materials modeling by developing an integrated toolchain that seamlessly combines DFT, SNAP, and LAMMPS (shown in Figure 1-1) with a machine-learning (ML) model that more efficiently extracts information from a smaller set of first-principles calculations. Our ML model enables us to accelerate first-principles data generation by interpolating existing high-fidelity data, and to extend the simulation scale by extrapolating high-fidelity data (10^2 atoms) to the mesoscale (10^4 atoms). It encodes the underlying physics of atomic interactions on the microscopic scale by adapting a variety of ML techniques, such as deep neural networks (DNNs) and graph neural networks (GNNs). We developed a new surrogate model for density functional theory using deep neural networks. The developed ML surrogate is demonstrated in a workflow that generates accurate band energies, total energies, and densities for aluminum systems at 298 K and 933 K. Furthermore, the models can be used to predict these quantities of interest for systems with more atoms than in the training data set. We have demonstrated that the ML model can compute the quantities of interest for systems with 100,000 Al atoms. Compared with a 2,000-atom Al system, the new surrogate model is as accurate as DFT but three orders of magnitude faster. We also explored optimal experimental design techniques for choosing the training data and novel graph neural networks for training on smaller data sets. These promising methods merit further exploration in future work.


Uncertainty and Sensitivity Analysis Methods and Applications in the GDSA Framework (FY2022)

Swiler, Laura P.; Basurto, Eduardo B.; Brooks, Dusty M.; Eckert, Aubrey C.; Leone, Rosemary C.; Mariner, Paul M.; Portone, Teresa P.; Smith, Mariah L.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by the SFWST Campaign and its predecessor, the Used Fuel Disposition (UFD) Campaign.


Graph-Based Similarity Metrics for Comparing Simulation Model Causal Structures

Naugle, Asmeret B.; Swiler, Laura P.; Lakkaraju, Kiran L.; Verzi, Stephen J.; Warrender, Christina E.; Romero, Vicente J.

The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
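The paper's metrics combine graphical representation with information theory; as a deliberately naive stand-in (an assumed toy baseline, not one of the paper's metrics), a Jaccard overlap of two models' causal edge sets illustrates the basic idea of comparing structures rather than outputs:

```python
def edge_jaccard(edges_a, edges_b):
    # Jaccard overlap of two directed causal edge sets; assumes the two
    # models' variables have been mapped onto shared node labels.
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Toy epidemic structures: SIR and SIS share only the S -> I link,
# so their structural overlap is 1/3 even if outputs look similar.
sir = [("S", "I"), ("I", "R")]
sis = [("S", "I"), ("I", "S")]
```

Real comparisons would also weight edges by interaction strength or information content, which a set overlap ignores.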


The Ground Truth Program: Simulations as Test Beds for Social Science Research Methods.

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Russell, Adam R.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Romero, Vicente J.

Social systems are uniquely complex and difficult to study, but understanding them is vital to solving the world’s problems. The Ground Truth program developed a new way of testing the research methods that attempt to understand and leverage the Human Domain and its associated complexities. The program developed simulations of social systems as virtual world test beds. Not only were these simulations able to produce data on future states of the system under various circumstances and scenarios, but their causal ground truth was also explicitly known. Research teams studied these virtual worlds, facilitating deep validation of causal inference, prediction, and prescription methods. The Ground Truth program model provides a way to test and validate research methods to an extent previously impossible, and to study the intricacies and interactions of different components of research.


Dakota, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis (V.6.16 User's Manual)

Adams, Brian H.; Bohnhoff, William B.; Dalbey, Keith D.; Ebeida, Mohamed S.; Eddy, John E.; Eldred, Michael E.; Hooper, Russell H.; Hough, Patricia H.; Hu, Kenneth H.; Jakeman, John J.; Khalil, Mohammad K.; Maupin, Kathryn M.; Monschke, Jason A.; Ridgway, Elliott R.; Rushdi, Ahmad A.; Seidl, Daniel S.; Stephens, John A.; Swiler, Laura P.; Tran, Anh; Winokur, Justin W.

The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.


GDSA Framework Development and Process Model Integration FY2021

Mariner, Paul M.; Berg, Timothy M.; Debusschere, Bert D.; Eckert, Aubrey C.; Harvey, Jacob H.; LaForce, Tara; Leone, Rosemary C.; Mills, Melissa M.; Nole, Michael A.; Park, Heeho D.; Perry, F.V.; Seidl, Daniel T.; Swiler, Laura P.; Chang, Kyung W.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (DOE 2012, Table 6; Sevougian et al. 2019). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media.


Sensitivity Analysis Comparisons on Geologic Case Studies: An International Collaboration

Swiler, Laura P.; Becker, Dirk-Alexander B.; Brooks, Dusty M.; Govaerts, Joan G.; Koskinen, Lasse K.; Plischke, Elmar P.; Röhlig, Klaus-Jürgen R.; Saveleva, Elena S.; Spiessl, Sabine M.; Stein, Emily S.; Svitelman, Valentina S.

Over the past four years, an informal working group has developed to investigate existing sensitivity analysis methods, examine new methods, and identify best practices. The focus is on the use of sensitivity analysis in case studies involving geologic disposal of spent nuclear fuel or nuclear waste. To examine ideas and have applicable test cases for comparison purposes, we have developed multiple case studies. Four of these case studies are presented in this report: the GRS clay case, the SNL shale case, the Dessel case, and the IBRAE groundwater case. We present the different sensitivity analysis methods investigated by various groups, the results obtained by different groups and different implementations, and summarize our findings.


Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) (Final Report)

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.


Science & Engineering of Cyber Security by Uncertainty Quantification and Rigorous Experimentation (SECURE) HANDBOOK

Pinar, Ali P.; Tarman, Thomas D.; Swiler, Laura P.; Gearhart, Jared L.; Hart, Derek H.; Vugrin, Eric D.; Cruz, Gerardo C.; Arguello, Bryan A.; Geraci, Gianluca G.; Debusschere, Bert D.; Hanson, Seth T.; Outkin, Alexander V.; Thorpe, Jamie T.; Hart, William E.; Sahakian, Meghan A.; Gabert, Kasimir G.; Glatter, Casey J.; Johnson, Emma S.; Punla-Green, She'ifa P.

Abstract not provided.

White paper on Verification and Validation for Cyber Emulation Models

Swiler, Laura P.

All disciplines that use models to predict the behavior of real-world systems need to determine the accuracy of the models’ results. Techniques for verification, validation, and uncertainty quantification (VVUQ) focus on improving the credibility of computational models and assessing their predictive capability. VVUQ emphasizes rigorous evaluation of models and how they are applied to improve understanding of model limitations and quantify the accuracy of model predictions.


The Fingerprints of Stratospheric Aerosol Injection in E3SM

Wagman, Benjamin M.; Swiler, Laura P.; Chowdhary, Kamaljit S.; Hillman, Benjamin H.

The June 15, 1991 Mt. Pinatubo eruption is simulated in E3SM by injecting 10 Tg of SO2 gas in the stratosphere, turning off prescribed volcanic aerosols, and enabling E3SM to treat stratospheric volcanic aerosols prognostically. This experimental prognostic treatment of volcanic aerosols in the stratosphere results in some realistic behaviors (SO2 evolves into H2SO4 which heats the lower stratosphere), and some expected biases (H2SO4 aerosols sediment out of the stratosphere too quickly). Climate fingerprinting techniques are used to establish a Mt. Pinatubo fingerprint based on the vertical profile of temperature from the E3SMv1 DECK ensemble. By projecting reanalysis data and preindustrial simulations onto the fingerprint, the Mt. Pinatubo stratospheric heating anomaly is detected. Projecting the experimental prognostic aerosol simulation onto the fingerprint also results in a detectable heating anomaly, but, as expected, the duration is too short relative to reanalysis data.


Foundations of Rigorous Cyber Experimentation

Stickland, Michael S.; Li, Justin D.; Swiler, Laura P.; Tarman, Thomas D.

This report presents the results of the "Foundations of Rigorous Cyber Experimentation" (FORCE) Laboratory Directed Research and Development (LDRD) project. This project is a companion project to the "Science and Engineering of Cyber security through Uncertainty quantification and Rigorous Experimentation" (SECURE) Grand Challenge LDRD project. This project leverages the offline, controlled nature of cyber experimentation technologies in general, and emulation testbeds in particular, to assess how uncertainties in network conditions affect uncertainties in key metrics. We conduct extensive experimentation using a Firewheel emulation-based cyber testbed model of Invisible Internet Project (I2P) networks to understand a de-anonymization attack previously presented in the literature. Our goals in this analysis are to see if we can leverage emulation testbeds to produce reliably repeatable experimental networks at scale, identify significant parameters influencing experimental results, replicate the previous results, quantify uncertainty associated with the predictions, and apply multi-fidelity techniques to forecast results to real-world network scales. The I2P networks we study are up to three orders of magnitude larger than the networks studied in SECURE and present additional challenges in identifying significant parameters. The key contributions of this project are the application of SECURE techniques such as UQ to a scenario of interest and the scaling of these techniques to larger network sizes. This report describes the experimental methods and results of these studies in more detail. In addition, the process of constructing these large-scale experiments tested the limits of the Firewheel emulation-based technologies. Therefore, another contribution of this work is that it informed the Firewheel developers of scaling limitations, which were subsequently corrected.


Uncertainty and Sensitivity Analysis Methods and Applications in the GDSA Framework (FY2021)

Swiler, Laura P.; Basurto, Eduardo B.; Brooks, Dusty M.; Eckert, Aubrey C.; Leone, Rosemary C.; Mariner, Paul M.; Portone, Teresa P.; Smith, Mariah L.; Stein, Emily S.

The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by the SFWST Campaign and its predecessor, the Used Fuel Disposition (UFD) Campaign. This report fulfills the GDSA Uncertainty and Sensitivity Analysis Methods work package (SF-21SN01030404) level 3 milestone, Uncertainty and Sensitivity Analysis Methods and Applications in GDSA Framework (FY2021) (M3SF-21SN010304042). It presents high level objectives and strategy for development of uncertainty and sensitivity analysis tools, demonstrates uncertainty quantification (UQ) and sensitivity analysis (SA) tools in GDSA Framework in FY21, and describes additional UQ/SA tools whose future implementation would enhance the UQ/SA capability of GDSA Framework. This work was closely coordinated with the other Sandia National Laboratories GDSA work packages: the GDSA Framework Development work package (SF-21SN01030405), the GDSA Repository Systems Analysis work package (SF-21SN01030406), and the GDSA PFLOTRAN Development work package (SF-21SN01030407). This report builds on developments reported in previous GDSA Framework milestones, particularly M3SF 20SN010304032.


Comparing reproduced cyber experimentation studies across different emulation testbeds

ACM International Conference Proceeding Series

Tarman, Thomas D.; Rollins, Trevor; Swiler, Laura P.; Cruz, Gerardo C.; Vugrin, Eric D.; Huang, Hao; Sahu, Abhijeet; Wlazlo, Patrick; Goulart, Ana; Davis, Kate

Cyber testbeds provide an important mechanism for experimentally evaluating cyber security performance. However, as an experimental discipline, reproducible cyber experimentation is essential to assure valid, unbiased results. Even minor differences in setup, configuration, and testbed components can have an impact on the experiments, and thus, reproducibility of results. This paper documents a case study in reproducing an earlier emulation study, with the reproduced emulation experiment conducted by a different research group on a different testbed. We describe lessons learned as a result of this process, both in terms of the reproducibility of the original study and in terms of the different testbed technologies used by both groups. This paper also addresses the question of how to compare results between two groups' experiments, identifying candidate metrics for comparison and quantifying the results in this reproduction study.


Validation Metrics for Fixed Effects and Mixed-Effects Calibration

Journal of Verification, Validation and Uncertainty Quantification

Porter, N.W.; Maupin, Kathryn A.; Swiler, Laura P.; Mousseau, Vincent A.

The modern scientific process often involves the development of a predictive computational model. To improve its accuracy, a computational model can be calibrated to a set of experimental data. A variety of validation metrics can be used to quantify this process. Some of these metrics have direct physical interpretations and a history of use, while others, especially those for probabilistic data, are more difficult to interpret. In this work, a variety of validation metrics are used to quantify the accuracy of different calibration methods. Frequentist and Bayesian perspectives are used with both fixed-effects and mixed-effects statistical models. Through a quantitative comparison of the resulting distributions, the most accurate calibration method can be selected. Two examples are included which compare the results of various validation metrics for different calibration methods. It is quantitatively shown that, in the presence of significant laboratory biases, a fixed-effects calibration is significantly less accurate than a mixed-effects calibration. This is because the mixed-effects statistical model better characterizes the underlying parameter distributions than the fixed-effects model. The results suggest that validation metrics can be used to select the most accurate calibration model for a particular empirical model with corresponding experimental data.


Dakota-NAERM Integration

Swiler, Laura P.; Newman, Sarah N.; Staid, Andrea S.; Barrett, Emily B.

This report presents the results of a collaborative effort under the Verification, Validation, and Uncertainty Quantification (VVUQ) thrust area of the North American Energy Resilience Model (NAERM) program. The goal of the effort described in this report was to integrate the Dakota software with the NAERM software framework to demonstrate sensitivity analysis of a co-simulation for NAERM.


Exploration of multifidelity UQ sampling strategies for computer network applications

International Journal for Uncertainty Quantification

Geraci, Gianluca G.; Crussell, Jonathan C.; Swiler, Laura P.; Debusschere, Bert D.

Network modeling is a powerful tool to enable rapid analysis of complex systems that can be challenging to study directly using physical testing. Two approaches are considered: emulation and simulation. The former runs real software on virtualized hardware, while the latter mimics the behavior of network components and their interactions in software. Although emulation provides an accurate representation of physical networks, this approach alone cannot guarantee the characterization of the system under realistic operative conditions. Operative conditions for physical networks are often characterized by intrinsic variability (payload size, packet latency, etc.) or a lack of precise knowledge regarding the network configuration (bandwidth, delays, etc.); therefore uncertainty quantification (UQ) strategies should be also employed. UQ strategies require multiple evaluations of the system with a number of evaluation instances that roughly increases with the problem dimensionality, i.e., the number of uncertain parameters. It follows that a typical UQ workflow for network modeling based on emulation can easily become unattainable due to its prohibitive computational cost. In this paper, a multifidelity sampling approach is discussed and applied to network modeling problems. The main idea is to optimally fuse information coming from simulations, which are a low-fidelity version of the emulation problem of interest, in order to decrease the estimator variance. By reducing the estimator variance in a sampling approach it is usually possible to obtain more reliable statistics and therefore a more reliable system characterization. Several network problems of increasing difficulty are presented. For each of them, the performance of the multifidelity estimator is compared with respect to the single-fidelity counterpart, namely, Monte Carlo (MC) sampling. For all the test problems studied in this work, the multifidelity estimator demonstrated an increased efficiency with respect to MC.
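The variance-reduction idea can be sketched as a two-fidelity control-variate estimator (an illustrative sketch with assumed toy models; the multifidelity estimators in the paper also optimize the sample allocation across fidelities):

```python
import random

def mf_estimate(f_hi, f_lo, n_hi=200, n_lo=20_000, seed=0):
    # A few paired high/low-fidelity runs set the control-variate weight
    # alpha; many cheap low-fidelity runs pin down the low-fidelity mean.
    rng = random.Random(seed)
    x_hi = [rng.gauss(0.0, 1.0) for _ in range(n_hi)]
    y_hi = [f_hi(x) for x in x_hi]
    y_lo_p = [f_lo(x) for x in x_hi]  # low-fidelity runs at the same inputs
    y_lo = [f_lo(rng.gauss(0.0, 1.0)) for _ in range(n_lo)]
    mu_h = sum(y_hi) / n_hi
    mu_p = sum(y_lo_p) / n_hi
    cov = sum((a - mu_h) * (b - mu_p)
              for a, b in zip(y_hi, y_lo_p)) / (n_hi - 1)
    var = sum((b - mu_p) ** 2 for b in y_lo_p) / (n_hi - 1)
    alpha = cov / var  # variance-minimizing control-variate weight
    return mu_h + alpha * (sum(y_lo) / n_lo - mu_p)
```

With a cheap model that tracks the expensive one closely (here, hypothetical surrogates x^2 and x^2 + 0.1x under a standard normal input), the combined estimate of E[x^2] = 1 is tighter than plain MC on the high-fidelity samples alone.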


Sensitivity and Uncertainty Analysis of Generator Failures under Extreme Temperature Scenarios in Power Systems

Emery, Benjamin F.; Staid, Andrea S.; Swiler, Laura P.

This report summarizes work done under the Verification, Validation, and Uncertainty Quantification (VVUQ) thrust area of the North American Energy Resilience Model (NAERM) Program. The specific task of interest described in this report is focused on sensitivity analysis of scenarios involving failures of both wind turbines and thermal generators under extreme cold-weather temperature conditions as would be observed in a Polar Vortex event.


An active learning high-throughput microstructure calibration framework for solving inverse structure–process problems in materials informatics

Acta Materialia

Tran, Anh; Mitchell, John A.; Swiler, Laura P.; Wildey, Tim

Determining a process–structure–property relationship is the holy grail of materials science, where both computational prediction in the forward direction and materials design in the inverse direction are essential. Problems in materials design are often considered in the context of process–property linkage by bypassing the materials structure, or in the context of structure–property linkage as in microstructure-sensitive design problems. However, there is a lack of research effort in studying materials design problems in the context of process–structure linkage, which has great implications for reverse engineering. In this work, given a target microstructure, we propose an active learning high-throughput microstructure calibration framework to derive a set of processing parameters that can produce an optimal microstructure statistically equivalent to the target microstructure. The proposed framework is formulated as a noisy multi-objective optimization problem, where each objective function measures a deterministic or statistical difference of the same microstructure descriptor between a candidate microstructure and a target microstructure. Furthermore, to significantly reduce wall-clock time, we enable the high-throughput feature of the microstructure calibration framework by adopting asynchronously parallel Bayesian optimization that exploits high-performance computing resources. Case studies in additive manufacturing and grain growth are used to demonstrate the applicability of the proposed framework, where kinetic Monte Carlo (kMC) simulation is used as a forward predictive model, such that for a given target microstructure, the target processing parameters that produced this microstructure are successfully recovered.


Uncertainty analysis of Resource Demand Model for Covid-19

Swiler, Laura P.; Portone, Teresa P.; Beyeler, Walter E.

As part of the Department of Energy response to the novel coronavirus pandemic of 2020, a modeling effort was sponsored by the DOE Office of Science. One task of this modeling effort at Sandia was to develop a model to predict medical resource needs given various patient arrival scenarios. Resources needed include personnel resources (nurses, ICU nurses, physicians, respiratory therapists), fixed resources (regular or ICU beds and ventilators), and consumable resources (masks, gowns, gloves, face shields, sedatives). This report documents the uncertainty analysis that was performed on the resource model. The uncertainty analysis involved sampling 26 input parameters to the model. The sampling was performed conditional on the patient arrival streams that also were inputs to the model. These patient arrival streams were derived from various epidemiology models and had a significant effect on the projected resource needs. In this report, we document the sampling approach, the parameter ranges used, and the computational workflow necessary to perform large-scale uncertainty studies for every county and state in the United States.
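Sampling studies over many parameters typically use stratified designs; a minimal Latin hypercube sampler over given parameter ranges (an illustrative sketch, not the workflow's actual code) can be written as:

```python
import random

def latin_hypercube(n, ranges, seed=0):
    # One Latin hypercube design: each parameter's range is split into n
    # equal strata, each stratum is sampled once, and the strata are then
    # shuffled independently per parameter.
    rng = random.Random(seed)
    cols = []
    for lo, hi in ranges:
        strata = [(k + rng.random()) / n for k in range(n)]
        rng.shuffle(strata)
        cols.append([lo + u * (hi - lo) for u in strata])
    return [list(row) for row in zip(*cols)]
```

Each parameter's marginal sample then covers its full range evenly, which is why stratified designs need far fewer runs than plain random sampling for the same coverage.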

More Details

Automated high-throughput tensile testing reveals stochastic process parameter sensitivity

Materials Science and Engineering: A

Heckman, Nathan H.; Ivanoff, Thomas I.; Roach, Ashley M.; Jared, Bradley H.; Tung, Daniel J.; Brown-Shaklee, Harlan J.; Huber, Todd H.; Saiz, David J.; Koepke, Joshua R.; Rodelas, Jeffrey R.; Madison, Jonathan D.; Salzbrenner, Bradley S.; Swiler, Laura P.; Jones, Reese E.; Boyce, Brad B.

The mechanical properties of additively manufactured metals tend to show high variability, due largely to the stochastic nature of defect formation during the printing process. This study seeks to show how automated high-throughput testing can be used to characterize the variable nature of additively manufactured metals at different print conditions and to allow for statistically meaningful analysis. This is demonstrated by analyzing how different processing parameters, including laser power, scan velocity, and scan pattern, influence the tensile behavior of additively manufactured stainless steel 316L using a newly developed automated test methodology. Microstructural characterization through computed tomography and electron backscatter diffraction is used to understand some of the observed trends in mechanical behavior. Specifically, grain size and morphology are shown to depend on processing parameters and influence the observed mechanical behavior. In the current study, laser powder bed fusion, also known as selective laser melting or direct metal laser sintering, is shown to produce 316L over a wide processing range without substantial detrimental effect on the tensile properties. Ultimate tensile strengths above 600 MPa, which are greater than those for typical wrought annealed 316L with similar grain sizes, and elongations to failure greater than 40% were observed. It is demonstrated that this process has little sensitivity to minor intentional or unintentional variations in laser velocity and power.

More Details

SECURE: An Evidence-based Approach to Cyber Experimentation

Proceedings - 2019 Resilience Week, RWS 2019

Pinar, Ali P.; Benz, Zachary O.; Castillo, Anya; Hart, Bill; Swiler, Laura P.; Tarman, Thomas D.

Securing cyber systems is of paramount importance, but rigorous, evidence-based techniques to support decision makers for high-consequence decisions have been missing. The need for bringing rigor into cybersecurity is well-recognized, but little progress has been made over the last decades. We introduce a new project, SECURE, that aims to bring more rigor into cyber experimentation. The core idea is to follow the footsteps of computational science and engineering and expand similar capabilities to support rigorous cyber experimentation. In this paper, we review the cyber experimentation process, present the research areas that underlie our effort, discuss the underlying research challenges, and report on our progress to date. This paper is based on work in progress, and we expect to have more complete results for the conference.

More Details

Progress in Deep Geologic Disposal Safety Assessment in the U.S. since 2010

Mariner, Paul M.; Connolly, Laura A.; Cunningham, Leigh C.; Debusschere, Bert D.; Dobson, David C.; Frederick, Jennifer M.; Hammond, Glenn E.; Jordan, Spencer H.; LaForce, Tara; Nole, Michael A.; Park, Heeho D.; Perry, Frank V.; Rogers, Ralph D.; Seidl, Daniel T.; Sevougian, Stephen D.; Stein, Emily S.; Swift, Peter N.; Swiler, Laura P.; Vo, Jonathan V.; Wallace, Michael G.

Abstract not provided.

Gaussian-Process-Driven Adaptive Sampling for Reduced-Order Modeling of Texture Effects in Polycrystalline Alpha-Ti

JOM

Tallman, Aaron E.; Stopka, Krzysztof S.; Swiler, Laura P.; Wang, Yan; Kalidindi, Surya R.; McDowell, David L.

Data-driven tools for finding structure–property (S–P) relations, such as the Materials Knowledge System (MKS) framework, can accelerate materials design once the costly and technical calibration process has been completed. A three-model method is proposed to reduce the expense of S–P relation model calibration: (1) direct simulations are performed according to (2) a Gaussian-process-based data collection model, to calibrate (3) an MKS homogenization model, in an application to α-Ti. The new methods compare favorably with expert texture selection on the performance of the calibrated MKS models. Benefits for the development of new and improved materials are discussed.

More Details

Validation Metrics for Deterministic and Probabilistic Data

Journal of Verification, Validation and Uncertainty Quantification

Maupin, Kathryn A.; Swiler, Laura P.; Porter, Nathan W.

Computational modeling and simulation are paramount to modern science. Computational models often replace physical experiments that are prohibitively expensive, dangerous, or occur at extreme scales. Thus, it is critical that these models accurately represent and can be used as replacements for reality. This paper provides an analysis of metrics that may be used to determine the validity of a computational model. While some metrics have a direct physical meaning and a long history of use, others, especially those that compare probabilistic data, are more difficult to interpret. Furthermore, the process of model validation is often application-specific, making the procedure itself challenging and the results difficult to defend. We therefore provide guidance and recommendations as to which validation metric to use, as well as how to use and decipher the results. Finally, an example is included that compares interpretations of various metrics and demonstrates the impact of model and experimental uncertainty on validation processes.
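Among metrics for comparing probabilistic data, the area between two empirical CDFs (often called the area validation metric) is a common choice. The sketch below is an illustrative stdlib-only implementation, not code from the paper:

```python
import bisect

def ecdf(sample, x):
    """Empirical CDF: fraction of the sample less than or equal to x."""
    s = sorted(sample)
    return bisect.bisect_right(s, x) / len(s)

def area_metric(a, b):
    """Area between the two empirical CDFs of samples a and b, a common
    validation metric for comparing probabilistic data. The CDFs are step
    functions, so the integral is a sum over the merged breakpoints."""
    pts = sorted(set(a) | set(b))
    area = 0.0
    for lo, hi in zip(pts, pts[1:]):
        area += abs(ecdf(a, lo) - ecdf(b, lo)) * (hi - lo)
    return area

d = area_metric([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
```

A useful property for interpretation: the metric carries the units of the quantity itself, and for a pure location shift between model and experiment it returns exactly the size of the shift.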

More Details

Exploration of multifidelity approaches for uncertainty quantification in network applications

Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering, UNCECOMP 2019

Geraci, Gianluca G.; Swiler, Laura P.; Crussell, Jonathan C.; Debusschere, Bert D.

Communication networks have evolved to a level of sophistication that requires computer models and numerical simulations to understand and predict their behavior. A network simulator is software that enables the network designer to model several components of a computer network, such as nodes, routers, switches, and links, and events, such as data transmissions and packet errors, in order to obtain device- and network-level metrics. Network simulations, like many other numerical approximations that model complex systems, are subject to the specification of parameters and operating conditions of the system. Very often the full characterization of the system and its inputs is not possible, so Uncertainty Quantification (UQ) strategies need to be deployed to evaluate the statistics of its response and behavior. UQ techniques, despite the advancements of the last two decades, still struggle in the presence of a large number of uncertain variables and when the regularity of the system's response cannot be guaranteed. In this context, multifidelity approaches have recently gained popularity in the UQ community due to their flexibility and robustness with respect to these challenges. The main idea behind these techniques is to extract information from a limited number of high-fidelity model realizations and complement it with a much larger set of lower-fidelity evaluations. The final result is an estimator with much lower variance, i.e., a more accurate and reliable estimator. In this contribution we investigate the possibility of deploying multifidelity UQ strategies for computer network analysis. Two numerical configurations are studied, based on a simplified network with one client and one server. Preliminary results for these tests suggest that multifidelity sampling techniques can be effective tools for UQ in network applications.
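The control-variate idea behind multifidelity estimators can be sketched in a few lines: correct the cheap model's mean with a small number of paired high-fidelity runs. The toy models and sample sizes below are invented for illustration and are not the network simulators studied in the paper:

```python
import math
import random

def f_hi(x):
    """Toy 'expensive' high-fidelity model."""
    return math.sin(x) + 0.05 * x * x

def f_lo(x):
    """Toy cheap low-fidelity model: misses the quadratic term."""
    return math.sin(x)

def mf_estimate(n_hi=100, n_lo=10000, seed=1):
    """Two-fidelity control-variate estimator of E[f_hi(X)], X ~ U(0, 1):
    a large cheap-model mean, corrected by the paired high/low discrepancy
    measured on only n_hi high-fidelity runs."""
    rng = random.Random(seed)
    xs_hi = [rng.random() for _ in range(n_hi)]   # paired hi/lo samples
    xs_lo = [rng.random() for _ in range(n_lo)]   # cheap-only samples
    mean_hi = sum(f_hi(x) for x in xs_hi) / n_hi
    mean_lo_paired = sum(f_lo(x) for x in xs_hi) / n_hi
    mean_lo = sum(f_lo(x) for x in xs_lo) / n_lo
    # Control-variate weight alpha = 1 is the simplest choice; the
    # variance-optimal alpha uses estimated covariances.
    return mean_hi + 1.0 * (mean_lo - mean_lo_paired)

est = mf_estimate()
```

Because the paired terms cancel everything the two fidelities share, the estimator's variance is driven by the (small) hi/lo discrepancy rather than by the full variance of the high-fidelity output.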

More Details

Methods of sensitivity analysis in geologic disposal safety assessment (GDSA) framework

International High-Level Radioactive Waste Management 2019, IHLRWM 2019

Stein, Emily S.; Swiler, Laura P.; Sevougian, Stephen D.

Probabilistic simulations of the post-closure performance of a generic deep geologic repository for commercial spent nuclear fuel in shale host rock provide a test case for comparing sensitivity analysis methods available in Geologic Disposal Safety Assessment (GDSA) Framework, the U.S. Department of Energy's state-of-the-art toolkit for repository performance assessment. Simulations assume a thick low-permeability shale with aquifers (potential paths to the biosphere) above and below the host rock. Multi-physics simulations on a 7-million-cell grid are run in a high-performance computing environment with PFLOTRAN. Epistemically uncertain inputs include properties of the engineered and natural systems. The output variables of interest, maximum I-129 concentrations (independent of time) at observation points in the aquifers, vary over several orders of magnitude. Variance-based global sensitivity analyses (i.e., calculations of sensitivity indices) conducted with Dakota use polynomial chaos expansion (PCE) and Gaussian process (GP) surrogate models. Results of analyses conducted with raw output concentrations and with log-transformed output concentrations are compared. Using log-transformed concentrations results in larger sensitivity indices for more influential input variables, smaller sensitivity indices for less influential input variables, and more consistent values for sensitivity indices between methods (PCE and GP) and between analyses repeated with samples of different sizes.
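A variance-based first-order sensitivity index, Si = Var(E[Y|Xi]) / Var(Y), can be approximated by binning the input, as in this illustrative sketch. The model Y = X1 + 3·X2 (true indices 0.1 and 0.9) is a toy, not the repository model; the paper's analyses used PCE and GP surrogates in Dakota:

```python
import random
import statistics

def first_order_index(xs, ys, bins=20):
    """Crude first-order sensitivity index: variance of the per-bin
    conditional means of Y, divided by the total variance of Y. This is
    a binning approximation to the variance-based (Sobol) index."""
    lo, hi = min(xs), max(xs)
    buckets = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        buckets[i].append(y)
    means = [statistics.fmean(b) for b in buckets if b]
    return statistics.pvariance(means) / statistics.pvariance(ys)

rng = random.Random(2)
x1 = [rng.random() for _ in range(20000)]
x2 = [rng.random() for _ in range(20000)]
y = [a + 3 * b for a, b in zip(x1, x2)]
s1 = first_order_index(x1, y)   # true value 0.1
s2 = first_order_index(x2, y)   # true value 0.9
```

On a strongly skewed output such as a concentration spanning orders of magnitude, running the same estimator on log(Y) instead of Y is exactly the kind of transformation whose effect on the indices the paper examines.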

More Details

Born Qualified Grand Challenge LDRD Final Report

Roach, R.A.; Argibay, Nicolas A.; Allen, Kyle M.; Balch, Dorian K.; Beghini, Lauren L.; Bishop, Joseph E.; Boyce, Brad B.; Brown, Judith A.; Burchard, Ross L.; Chandross, M.; Cook, Adam W.; DiAntonio, Christopher D.; Dressler, Amber D.; Forrest, Eric C.; Ford, Kurtis R.; Ivanoff, Thomas I.; Jared, Bradley H.; Johnson, Kyle J.; Kammler, Daniel K.; Koepke, Joshua R.; Kustas, Andrew K.; Lavin, Judith M.; Leathe, Nicholas L.; Lester, Brian T.; Madison, Jonathan D.; Mani, Seethambal S.; Martinez, Mario J.; Moser, Daniel M.; Rodgers, Theron R.; Seidl, Daniel T.; Brown-Shaklee, Harlan J.; Stanford, Joshua S.; Stender, Michael S.; Sugar, Joshua D.; Swiler, Laura P.; Taylor, Samantha T.; Trembacki, Bradley T.

This SAND report fulfills the final report requirement for the Born Qualified Grand Challenge LDRD. Born Qualified was funded from FY16-FY18 with a total budget of ~$13M over the 3 years of funding. Overall 70+ staff, Post Docs, and students supported this project over its lifetime. The driver for Born Qualified was using Additive Manufacturing (AM) to change the qualification paradigm for low volume, high value, high consequence, complex parts that are common in high-risk industries such as ND, defense, energy, aerospace, and medical. AM offers the opportunity to transform design, manufacturing, and qualification with its unique capabilities. AM is a disruptive technology, allowing the capability to simultaneously create part and material while tightly controlling and monitoring the manufacturing process at the voxel level, with the inherent flexibility and agility in printing layer-by-layer. AM enables the possibility of measuring critical material and part parameters during manufacturing, thus changing the way we collect data, assess performance, and accept or qualify parts. It provides an opportunity to shift from the current iterative design-build-test qualification paradigm using traditional manufacturing processes to design-by-predictivity where requirements are addressed concurrently and rapidly. The new qualification paradigm driven by AM provides the opportunity to predict performance probabilistically, to optimally control the manufacturing process, and to implement accelerated cycles of learning. Exploiting these capabilities to realize a new uncertainty quantification-driven qualification that is rapid, flexible, and practical is the focus of this effort.

More Details

Posters for AA/CE Reception

Kuether, Robert J.; Allensworth, Brooke M.; Backer, Adam B.; Chen, Elton Y.; Dingreville, Remi P.; Forrest, Eric C.; Knepper, Robert; Tappan, Alexander S.; Marquez, Michael P.; Vasiliauskas, Jonathan G.; Rupper, Stephen G.; Grant, Michael J.; Atencio, Lauren C.; Hipple, Tyler J.; Maes, Danae M.; Timlin, Jerilyn A.; Ma, Tian J.; Garcia, Rudy J.; Danford, Forest L.; Patrizi, Laura P.; Galasso, Jennifer G.; Draelos, Timothy J.; Gunda, Thushara G.; Venezuela, Otoniel V.; Brooks, Wesley A.; Anthony, Stephen M.; Carson, Bryan C.; Reeves, Michael J.; Roach, Matthew R.; Maines, Erin M.; Lavin, Judith M.; Whetten, Shaun R.; Swiler, Laura P.

Abstract not provided.

Data Analysis for the Born Qualified Grand LDRD Project

Swiler, Laura P.; van Bloemen Waanders, Bart G.; Jared, Bradley H.; Koepke, Joshua R.; Whetten, Shaun R.; Madison, Jonathan D.; Ivanoff, Thomas I.; Jackson, Olivia D.; Cook, Adam W.; Brown-Shaklee, Harlan J.; Kammler, Daniel K.; Johnson, Kyle J.; Ford, Kurtis R.; Bishop, Joseph E.; Roach, R.A.

This report summarizes the data analysis activities that were performed under the Born Qualified Grand Challenge Project from 2016 to 2018. It is meant to document the characterization of additively manufactured parts and processes for this project, as well as to demonstrate and identify further analyses and data science that could be done relating material processes to microstructure to properties to performance.

More Details

Sample Generation for Nuclear Data

Swiler, Laura P.; Adams, Brian M.; Wieselquist, William W.

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices. The treatment allows one to assume the cross sections are distributed according to a multivariate normal, lognormal, or truncated normal distribution.
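One standard way to handle an ill-conditioned covariance before drawing correlated samples is a Cholesky factorization with a small diagonal "jitter". Whether this matches the report's actual treatment is not stated here, so the sketch below is a generic, stdlib-only illustration:

```python
import math
import random

def cholesky_with_jitter(C, jitter=1e-10):
    """Lower-triangular factor L with L L^T ~= C + jitter*I. Adding a tiny
    diagonal term is a common fix for nearly singular covariance matrices
    whose plain Cholesky factorization would fail."""
    n = len(C)
    A = [row[:] for row in C]
    for i in range(n):
        A[i][i] += jitter
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(max(A[i][i] - s, jitter))
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def sample_mvn(mean, L, rng):
    """One correlated multivariate-normal draw: x = mean + L z."""
    z = [rng.gauss(0.0, 1.0) for _ in mean]
    return [m + sum(L[i][k] * z[k] for k in range(i + 1))
            for i, m in enumerate(mean)]

# Nearly singular 2x2 covariance (correlation 0.99999).
C = [[1.0, 0.99999], [0.99999, 1.0]]
L = cholesky_with_jitter(C)
x = sample_mvn([0.0, 0.0], L, random.Random(3))
```

A lognormal or truncated-normal variant, as mentioned in the abstract, would apply the corresponding transform to each correlated normal draw.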

More Details

Fast Approximate Union Volume in High Dimensions with Line Samples

Mitchell, Scott A.; Awad, Muhammad A.; Ebeida, Mohamed S.; Swiler, Laura P.

The classical problem of calculating the volume of the union of d-dimensional balls is known as "Union Volume." We present line-sampling approximation algorithms for Union Volume. Our methods may be extended to other Boolean operations, such as set difference (setminus), or to other shapes, such as hyper-rectangles. The deterministic, exact approaches for Union Volume do not scale well to high dimensions. However, we adapt several of these exact approaches into approximation algorithms based on sampling. We perform local sampling within each ball using lines. We have several variations, depending on how the overlapping volume is partitioned and on whether radial, axis-aligned, or other line patterns are used. Our variations fall within the family of Monte Carlo sampling, and hence have about the same theoretical convergence rate, 1/√M, where M is the number of samples. In our limited experiments, line sampling proved more accurate per unit work than point sampling, because a line sample provides more information, and the analytic equation for a sphere makes the calculation almost as fast. We performed a limited empirical study of the efficiency of these variations and suggest a more extensive study for future work. We speculate that different ball arrangements, differentiated by the distribution of overlaps in terms of volume and degree, will benefit the most from patterns of line samples that preferentially capture those overlaps.

Acknowledgements: We thank Karl Bringman for explaining his BF-ApproxUnion (ApproxUnion) algorithm [3] to us. We thank Josiah Manson for pointing out that spoke darts oversample the center and that we might get a better answer by uniform sampling. We thank Vijay Natarajan for suggesting random chord sampling. The authors are grateful to Brian Adams, Keith Dalbey, and Vicente Romero for useful technical discussions. This work was sponsored by the Laboratory Directed Research and Development (LDRD) Program at Sandia National Laboratories. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research (ASCR), Applied Mathematics Program. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
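For context, the point-sampling Monte Carlo baseline that the line-sampling variants improve upon can be written compactly. The ball arrangement below is a toy example, and the 1/√M convergence rate applies to both approaches:

```python
import random

def union_volume_mc(balls, box, n=200000, seed=4):
    """Plain point-sampling Monte Carlo estimate of the volume of a union
    of d-dimensional balls inside a bounding box: the fraction of random
    points landing inside any ball, scaled by the box volume."""
    rng = random.Random(seed)
    lo, hi = box
    box_vol = 1.0
    for a, b in zip(lo, hi):
        box_vol *= (b - a)
    hits = 0
    for _ in range(n):
        p = [rng.uniform(a, b) for a, b in zip(lo, hi)]
        if any(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) <= r * r
               for c, r in balls):
            hits += 1
    return box_vol * hits / n

# Two disjoint unit disks in 2-D: the exact union area is 2*pi.
balls = [((-2.0, 0.0), 1.0), ((2.0, 0.0), 1.0)]
est = union_volume_mc(balls, ((-4.0, -2.0), (4.0, 2.0)))
```

A line sample replaces the point-in-union test with the analytically computed length of a line's intersection with the union, which is what makes it more informative per unit work.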

More Details

Changing the Engineering Design & Qualification Paradigm in Component Design & Manufacturing (Born Qualified)

Roach, R.A.; Bishop, Joseph E.; Jared, Bradley H.; Keicher, David M.; Cook, Adam W.; Whetten, Shaun R.; Forrest, Eric C.; Stanford, Joshua S.; Boyce, Brad B.; Johnson, Kyle J.; Rodgers, Theron R.; Ford, Kurtis R.; Martinez, Mario J.; Moser, Daniel M.; van Bloemen Waanders, Bart G.; Chandross, M.; Abdeljawad, Fadi F.; Allen, Kyle M.; Stender, Michael S.; Beghini, Lauren L.; Swiler, Laura P.; Lester, Brian T.; Argibay, Nicolas A.; Brown-Shaklee, Harlan J.; Kustas, Andrew K.; Sugar, Joshua D.; Kammler, Daniel K.; Wilson, Mark A.

Abstract not provided.

Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota

Portone, Teresa P.; Niederhaus, John H.; Sanchez, Jason J.; Swiler, Laura P.

This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

More Details

Bayesian inversion of seismic and electromagnetic data for marine gas reservoir characterization using multi-chain Markov chain Monte Carlo sampling

Journal of Applied Geophysics

Ren, Huiying; Ray, Jaideep R.; Hou, Zhangshuan; Huang, Maoyi; Bao, Jie; Swiler, Laura P.

In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov-chain Monte Carlo sampler, which is a hybrid of the DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov-chain Monte Carlo sampler is scalable in terms of the number of chains and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the joint inversion of seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle inversion alone, especially for the parameters in deep layers. The performance of the inversion approach was evaluated for various levels of noise in the observational data; reasonable estimates can be obtained with noise levels up to 25%. Sampling efficiency due to the use of multiple chains was also checked and was found to have almost linear scalability.

More Details

Treatment of Nuclear Data Covariance Information in Sample Generation

Swiler, Laura P.; Adams, Brian M.; Wieselquist, William W.

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.

More Details

SAChES: Scalable Adaptive Chain-Ensemble Sampling

Swiler, Laura P.; Ray, Jaideep R.; Ebeida, Mohamed S.; Huang, Maoyi H.; Hou, Zhangshuan H.; Bao, Jie B.; Ren, Huiying R.

We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to silent errors, which may be unavoidable in the extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that resulted from the project, and discusses some additional results.
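Stripped of the differential-evolution proposals, adaptation, and fault tolerance, the chain-ensemble idea reduces to pooling draws from many loosely coupled Metropolis chains. The sketch below targets a toy one-dimensional posterior and is only a schematic of the approach, not the SAChES implementation:

```python
import math
import random

def log_post(theta):
    """Toy log-posterior: a standard normal density (up to a constant)."""
    return -0.5 * theta * theta

def chain_ensemble(n_chains=20, n_steps=2000, step=1.0, seed=5):
    """Ensemble of independent Metropolis chains; the pooled post-burn-in
    draws approximate the posterior. SAChES adds differential-evolution
    proposals, adaptation, and chain re-spawning on top of this idea."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_chains):
        theta = rng.uniform(-3, 3)       # over-dispersed start
        lp = log_post(theta)
        for t in range(n_steps):
            prop = theta + rng.gauss(0.0, step)
            lp_prop = log_post(prop)
            if math.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            if t >= n_steps // 2:        # discard the first half as burn-in
                draws.append(theta)
    return draws

draws = chain_ensemble()
```

Because the chains are independent, losing a few of them (silent errors) degrades the pooled estimate only in proportion to the lost fraction, which is the robustness property the report emphasizes.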

More Details

Extreme-Value Statistics Reveal Rare Failure-Critical Defects in Additive Manufacturing

Advanced Engineering Materials

Boyce, Brad B.; Salzbrenner, Bradley S.; Rodelas, Jeffrey R.; Swiler, Laura P.; Madison, Jonathan D.; Jared, Bradley H.; Shen, Yu L.

Additive manufacturing enables the rapid, cost effective production of customized structural components. To fully capitalize on the agility of additive manufacturing, it is necessary to develop complementary high-throughput materials evaluation techniques. In this study, over 1000 nominally identical tensile tests are used to explore the effect of process variability on the mechanical property distributions of a precipitation hardened stainless steel produced by a laser powder bed fusion process, also known as direct metal laser sintering or selective laser melting. With this large dataset, rare defects are revealed that affect only ≈2% of the population, stemming from a single build lot of material. The rare defects cause a substantial loss in ductility and are associated with an interconnected network of porosity. The adoption of streamlined test methods will be paramount to diagnosing and mitigating such dangerous anomalies in future structural components.
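Analyses like this typically summarize property scatter with a Weibull fit. A simple probability-plot (rank-regression) estimate of the Weibull shape and scale, applied here to synthetic data rather than the paper's measurements, looks like:

```python
import math
import random

def weibull_fit(data):
    """Estimate Weibull shape and scale by least squares on the probability
    plot: regress ln(-ln(1 - F_i)) on ln(x_i), using median-rank plotting
    positions F_i = (i - 0.3) / (n + 0.4). The slope is the shape."""
    xs = sorted(data)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1 - (i + 1 - 0.3) / (n + 0.4))))
           for i, x in enumerate(xs)]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = (sum((p[0] - mx) * (p[1] - my) for p in pts)
             / sum((p[0] - mx) ** 2 for p in pts))
    shape = slope
    scale = math.exp(mx - my / slope)    # from the regression intercept
    return shape, scale

# Synthetic "ductility" data from a known Weibull(shape=5, scale=40),
# generated by inverse-transform sampling.
rng = random.Random(6)
data = [40.0 * (-math.log(1.0 - rng.random())) ** (1.0 / 5.0)
        for _ in range(2000)]
shape, scale = weibull_fit(data)
```

A rare-defect population like the ≈2% tail described in the abstract shows up on such a plot as a distinct kink away from the single-Weibull line, which is why large sample counts matter for detecting it.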

More Details

Integration of Dakota into the NEAMS Workbench

Swiler, Laura P.; Lefebvre, Robert A.; Langley, Brandon R.; Thompson, Adam B.

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on integrating Dakota into the NEAMS Workbench. The NEAMS Workbench, developed at Oak Ridge National Laboratory, is a new software framework that provides a graphical user interface, input file creation, parsing, validation, job execution, workflow management, and output processing for a variety of nuclear codes. Dakota is a tool developed at Sandia National Laboratories that provides a suite of uncertainty quantification and optimization algorithms. Providing Dakota within the NEAMS Workbench allows users of nuclear simulation codes to perform uncertainty and optimization studies on their nuclear codes from within a common, integrated environment. Details of the integration and parsing are provided, along with an example of Dakota running a sampling study on the fuels performance code, BISON, from within the NEAMS Workbench.

More Details

High-throughput stochastic tensile performance of additively manufactured stainless steel

Journal of Materials Processing Technology

Salzbrenner, Bradley S.; Rodelas, Jeffrey R.; Madison, Jonathan D.; Jared, Bradley H.; Swiler, Laura P.; Shen, Yu L.; Boyce, Brad B.

An adage within the Additive Manufacturing (AM) community is that “complexity is free”. Complicated geometric features that normally drive manufacturing cost and limit design options are not typically problematic in AM. While geometric complexity is usually viewed from the perspective of part design, this advantage of AM also opens up new options in rapid, efficient material property evaluation and qualification. In the current work, an array of 100 miniature tensile bars are produced and tested for a comparable cost and in comparable time to a few conventional tensile bars. With this technique, it is possible to evaluate the stochastic nature of mechanical behavior. The current study focuses on stochastic yield strength, ultimate strength, and ductility as measured by strain at failure (elongation). However, this method can be used to capture the statistical nature of many mechanical properties including the full stress-strain constitutive response, elastic modulus, work hardening, and fracture toughness. Moreover, the technique could extend to strain-rate and temperature dependent behavior. As a proof of concept, the technique is demonstrated on a precipitation hardened stainless steel alloy, commonly known as 17-4PH, produced by two commercial AM vendors using a laser powder bed fusion process, also commonly known as selective laser melting. Using two different commercial powder bed platforms, the vendors produced material that exhibited slightly lower strength and markedly lower ductility compared to wrought sheet. Moreover, the properties were much less repeatable in the AM materials as analyzed in the context of a Weibull distribution, and the properties did not consistently meet minimum allowable requirements for the alloy as established by AMS. The diminished, stochastic properties were examined in the context of major contributing factors such as surface roughness and internal lack-of-fusion porosity. 
This high-throughput capability is expected to be useful for follow-on extensive parametric studies of factors that affect the statistical reliability of AM components.

More Details

Recommended Research Directions for Improving the Validation of Complex Systems Models

Vugrin, Eric D.; Trucano, Timothy G.; Swiler, Laura P.; Finley, Patrick D.; Flanagan, Tatiana P.; Naugle, Asmeret B.; Tsao, Jeffrey Y.; Verzi, Stephen J.

More Details

POF-Darts: Geometric adaptive sampling for probability of failure

Reliability Engineering and System Safety

Ebeida, Mohamed S.; Mitchell, Scott A.; Swiler, Laura P.; Romero, Vicente J.; Rushdi, Ahmad A.

We introduce a novel technique, POF-Darts, to estimate the Probability Of Failure based on random disk-packing in the uncertain parameter space. POF-Darts uses hyperplane sampling to explore the unexplored part of the uncertain space. We use the function evaluation at a sample point to determine whether it belongs to the failure or non-failure region, and surround it with a protection-sphere region to avoid clustering. We decompose the domain into Voronoi cells around the function evaluations as seeds and choose the radius of the protection sphere depending on the local Lipschitz continuity. As sampling proceeds, the regions not covered by spheres will shrink, improving the estimation accuracy. After exhausting the function-evaluation budget, we build a surrogate model using the function evaluations associated with the sample points and estimate the probability of failure by exhaustive sampling of that surrogate. In comparison to other similar methods, our algorithm has the advantages of decoupling the sampling step from surrogate construction, the ability to reach target POF values with fewer samples, and the capability of estimating the number and locations of disconnected failure regions, not just the POF value. We present various examples to demonstrate the efficiency of our novel approach.
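As a baseline for what POF-Darts tries to beat, the brute-force Monte Carlo probability-of-failure estimate is simply the failing fraction of random samples. The limit-state function below is a textbook toy, not one of the paper's examples:

```python
import random

def pof_monte_carlo(limit_state, sampler, n=200000, seed=7):
    """Brute-force Monte Carlo probability-of-failure estimate: the
    fraction of samples for which limit_state(x) < 0. Methods such as
    POF-Darts aim for comparable accuracy with far fewer evaluations."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if limit_state(sampler(rng)) < 0.0)
    return fails / n

# Toy problem: failure when a standard-normal "load" exceeds a fixed
# "capacity" of 2.0, so the true POF is 1 - Phi(2) ~= 0.0228.
pof = pof_monte_carlo(lambda x: 2.0 - x, lambda rng: rng.gauss(0.0, 1.0))
```

The expense of this baseline scales like 1/POF, which is exactly why surrogate-assisted adaptive methods matter for small failure probabilities.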

More Details

User Guidelines and Best Practices for CASL VUQ Analysis Using Dakota

Adams, Brian M.; Coleman, Kayla M.; Hooper, Russell H.; Khuwaileh, Bassam A.; Lewis, Allison L.; Smith, Ralph S.; Swiler, Laura P.; Turinsky, Paul J.; Williams, Brian W.

Sandia's Dakota software (available at http://dakota.sandia.gov) supports science and engineering transformation through advanced exploration of simulations. Specifically, it manages and analyzes ensembles of simulations to provide broader and deeper perspective for analysts and decision makers. This enables them to enhance understanding of risk, improve products, and assess simulation credibility. This manual offers Consortium for Advanced Simulation of Light Water Reactors (CASL) partners a guide to conducting Dakota-based VUQ studies for CASL problems. It motivates various classes of Dakota methods and includes examples of their use on representative application problems. On reading, a CASL analyst should understand why and how to apply Dakota to a simulation problem.

More Details

Sensitivity Analysis in Xyce

Keiter, Eric R.; Swiler, Laura P.; Russo, Thomas V.; Wilcox, Ian Z.

Parametric sensitivities of dynamic system responses are very useful in a variety of applications, including circuit optimization and uncertainty quantification. Sensitivity calculation methods fall into two related categories: direct and adjoint methods. Effective implementation of such methods in a production circuit simulator poses a number of technical challenges, including instrumentation of device models. This report documents several years of work developing and implementing direct and adjoint sensitivity methods in the Xyce circuit simulator. Much of this work was sponsored by the Laboratory Directed Research and Development (LDRD) Program at Sandia National Laboratories, under project LDRD 14-0788.

More Details

Advanced Uncertainty Quantification Methods for Circuit Simulation: Final Report LDRD 2016-0845

Keiter, Eric R.; Swiler, Laura P.; Wilcox, Ian Z.

This report summarizes the methods and algorithms that were developed under the Sandia National Laboratories LDRD project entitled "Advanced Uncertainty Quantification Methods for Circuit Simulation", project # 173331 and proposal # 2016-0845. As much of our work has been published in other reports and publications, this report gives a brief summary. Those who are interested in the technical details are encouraged to read the full published results and to contact the report authors for the status of follow-on projects.

More Details

Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, S.D.; Naugle, Asmeret B.; Verzi, Stephen J.; Swiler, Laura P.; Johnson, Curtis M.; Smith, Mark A.; Flanagan, Tatiana P.; Vugrin, Eric D.; Gabert, Kasimir G.; Lave, Matthew S.; Chen, Wei C.; DeLaurentis, Daniel D.; Hubler, Alfred H.; Oberkampf, Bill O.

This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real time, by decision makers?

More Details

Uncertainty quantification and sensitivity analysis applications to fuel performance modeling

Top Fuel 2016: LWR Fuels with Enhanced Safety and Performance

Gamble, Kyle A.; Swiler, Laura P.

Best-estimate fuel performance codes such as BISON, currently under development at the Idaho National Laboratory, utilize empirical and mechanistic lower-length-scale-informed correlations to predict fuel behavior under normal operating and accident reactor conditions. Traditionally, best-estimate results are presented using these correlations with no quantification of the uncertainty in the output metrics of interest. However, there are uncertainties associated with the input parameters and correlations used to determine the behavior of the fuel and cladding under irradiation. Therefore, it is important to perform uncertainty quantification and to include confidence bounds on the output metrics that take into account the uncertainties in the inputs. In addition, sensitivity analyses can be performed to determine which input parameters have the greatest influence on the outputs. In this paper we couple the BISON fuel performance code to the DAKOTA uncertainty analysis software to analyze a representative fuel performance problem. The case studied in this paper is based upon rod 1 from the IFA-432 integral experiment performed at the Halden Reactor in Norway. The rodlet is representative of a BWR fuel rod. The input parameter uncertainties are broken into three categories: boundary condition uncertainties (e.g., power, coolant flow rate), manufacturing uncertainties (e.g., pellet diameter, cladding thickness), and model uncertainties (e.g., fuel thermal conductivity, fuel swelling). Utilizing DAKOTA, a variety of statistical analysis techniques are applied to quantify the uncertainty and sensitivity of the output metrics of interest. Specifically, we demonstrate the use of sampling methods, polynomial chaos expansions, surrogate models, and variance-based decomposition. The output metrics investigated in this study are the fuel centerline temperature, cladding surface temperature, fission gas released, and fuel rod diameter.
The results highlight the importance of quantifying the uncertainty and sensitivity in fuel performance modeling predictions and the need for additional research into improving the material models that are currently available.
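The variance-based decomposition mentioned above can be sketched in a few lines. The snippet below is illustrative only: it replaces the BISON/DAKOTA coupling with an invented three-input algebraic model (the input names and coefficients are assumptions, not values from the study) and estimates first-order Sobol' indices with the standard Saltelli pick-freeze estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a fuel performance response: "centerline temperature"
# as a function of three normalized inputs. Names and coefficients are
# invented for illustration; the real study couples BISON to DAKOTA.
def model(x):
    power, gap, cond = x[:, 0], x[:, 1], x[:, 2]
    return 1000.0 + 400.0 * power + 150.0 * gap - 200.0 * cond + 50.0 * power * gap

n = 20000
A = rng.uniform(0.0, 1.0, size=(n, 3))
B = rng.uniform(0.0, 1.0, size=(n, 3))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol' index via the Saltelli estimator:
#   S_i = E[f_B * (f_AB_i - f_A)] / Var(f)
# where AB_i takes column i from B and all other columns from A.
S = {}
for i, name in enumerate(["power", "gap", "conductivity"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S[name] = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S_{name} = {S[name]:.2f}")
```

Because the toy model is nearly additive, the first-order indices sum to roughly one; a large interaction term would leave a gap between that sum and unity.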

More Details

Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

Mitchell, Scott A.; Ebeida, Mohamed S.; Romero, Vicente J.; Swiler, Laura P.; Rushdi, Ahmad A.; Abdelkader, Ahmad A.

This SAND report summarizes our work on the Sandia National Laboratories LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry", which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.

More Details

Sensitivity Analysis of OECD Benchmark Tests in BISON

Swiler, Laura P.; Gamble, Kyle G.; Schmidt, Rodney C.; Williamson, Richard W.

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON fuel performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
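The Pearson and Spearman coefficients used in this study measure linear and rank (monotone) association, respectively. A minimal sketch, using an invented input (a thermal conductivity multiplier) and an invented monotone response in place of the actual BISON samples:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)

# 300 samples of a hypothetical input and a monotone, mildly nonlinear
# response; both are fabricated for illustration, not taken from the study.
cond_mult = rng.uniform(0.9, 1.1, 300)
temp = 1200.0 / cond_mult + rng.normal(0.0, 5.0, 300)

r_pearson, _ = pearsonr(cond_mult, temp)    # linear association
r_spearman, _ = spearmanr(cond_mult, temp)  # rank (monotone) association
print(f"Pearson:  {r_pearson:.3f}")
print(f"Spearman: {r_spearman:.3f}")
```

For a strictly monotone response the Spearman coefficient stays near its extreme value even when the relationship is nonlinear, which is why both measures are typically reported side by side.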

More Details

Spectral neighbor analysis method for automated generation of quantum-accurate interatomic potentials

Journal of Computational Physics

Thompson, Aidan P.; Swiler, Laura P.; Trott, C.R.; Foiles, Stephen M.; Tucker, G.J.

We present a new interatomic potential for solids and liquids called Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The bispectrum components are the same bond-orientational order parameters employed by the GAP potential [1]. The SNAP potential, unlike GAP, assumes a linear relationship between atom energy and bispectrum components. The linear SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. We demonstrate that a previously unnoticed symmetry property can be exploited to reduce the computational cost of the force calculations by more than one order of magnitude. We present results for a SNAP potential for tantalum, showing that it accurately reproduces a range of commonly calculated properties of both the crystalline solid and the liquid phases. In addition, unlike simpler existing potentials, SNAP correctly predicts the energy barrier for screw dislocation migration in BCC tantalum.
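The linear regression step at the heart of SNAP can be sketched directly. The snippet below fabricates a small "training set" of descriptor vectors and reference energies (sizes, weights, and noise level are all assumptions) and solves the weighted least-squares problem the abstract describes with NumPy; the production fit operates on real bispectrum components and QM data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated training set: each row of D holds the bispectrum components
# of one configuration; E_qm holds the reference "QM" energies.
n_configs, n_bispectrum = 200, 10
D = rng.normal(size=(n_configs, n_bispectrum))
beta_true = rng.normal(size=n_bispectrum)              # hidden coefficients
E_qm = D @ beta_true + rng.normal(0.0, 0.01, n_configs)  # energies + noise

# Weighted least squares: minimize sum_k w_k * (D_k . beta - E_k)^2
# by scaling rows with sqrt(w) and solving the ordinary problem.
w = np.ones(n_configs)
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(sw[:, None] * D, sw * E_qm, rcond=None)

print("max coefficient error:", float(np.abs(beta - beta_true).max()))
```

Uniform weights are used here for simplicity; in practice the weights let energy, force, and stress equations contribute to the fit on comparable scales.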

More Details

Arctic Climate Systems Analysis

Ivey, Mark D.; Robinson, David G.; Boslough, Mark B.; Backus, George A.; Peterson, Kara J.; van Bloemen Waanders, Bart G.; Swiler, Laura P.; Desilets, Darin M.; Reinert, Rhonda K.

This study began with a challenge from program area managers at Sandia National Laboratories to technical staff in the energy, climate, and infrastructure security areas: apply a systems-level perspective to existing science and technology program areas in order to determine technology gaps, identify new technical capabilities at Sandia that could be applied to these areas, and identify opportunities for innovation. The Arctic was selected as one of these areas for systems-level analyses, and this report documents the results. In this study, an emphasis was placed on the Arctic atmosphere since Sandia has been active in atmospheric research in the Arctic since 1997. This study begins with a discussion of the challenges and benefits of analyzing the Arctic as a system. It goes on to discuss current and future needs of the defense, scientific, energy, and intelligence communities for more comprehensive data products related to the Arctic; assess the current state of atmospheric measurement resources available for the Arctic; and explain how the capabilities at Sandia National Laboratories can be used to address the identified technological, data, and modeling needs of the defense, scientific, energy, and intelligence communities for Arctic support.

More Details