Publications

Results 1–50 of 335

The Ground Truth Program: Simulations as Test Beds for Social Science Research Methods

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Russell, Adam R.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Romero, Vicente J.

Social systems are uniquely complex and difficult to study, but understanding them is vital to solving the world’s problems. The Ground Truth program developed a new way of testing the research methods that attempt to understand and leverage the Human Domain and its associated complexities. The program developed simulations of social systems as virtual world test beds. Not only were these simulations able to produce data on future states of the system under various circumstances and scenarios, but their causal ground truth was also explicitly known. Research teams studied these virtual worlds, facilitating deep validation of causal inference, prediction, and prescription methods. The Ground Truth program model provides a way to test and validate research methods to an extent previously impossible, and to study the intricacies and interactions of different components of research.

More Details

A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

Reliability Engineering and System Safety

Groth, Katrina G.; Swiler, Laura P.; Adams, Susan S.

In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
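
As a rough illustration of this kind of update (not the paper's exact formulation), the sketch below applies a conjugate beta-binomial model in Python, with the prior mean anchored at an HEP from an existing HRA method; the prior strength and the simulator counts are illustrative values, not data from the study.

# Minimal sketch of a conjugate beta-binomial update of a human error
# probability (HEP). The prior mean is anchored at an HEP assigned by an
# existing HRA method; the prior strength and simulator counts below are
# illustrative, not values from the paper.
from scipy import stats

hep_prior_mean = 1e-2   # HEP assigned by the existing HRA method (illustrative)
prior_strength = 20.0   # equivalent pseudo-observations encoding prior confidence

alpha0 = hep_prior_mean * prior_strength
beta0 = (1.0 - hep_prior_mean) * prior_strength

failures, trials = 2, 50   # hypothetical simulator outcomes

alpha_post = alpha0 + failures
beta_post = beta0 + (trials - failures)

posterior = stats.beta(alpha_post, beta_post)
print("posterior mean HEP:", posterior.mean())
print("90% credible interval:", posterior.interval(0.90))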

More Details

A comparison of methods for representing sparsely sampled random quantities

Romero, Vicente J.; Swiler, Laura P.; Urbina, Angel U.

This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
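
To illustrate one of the techniques named above, the sketch below computes a normal-based two-sided statistical tolerance interval from a sparse sample using Howe's k-factor approximation; it assumes approximate normality and scipy availability, and is not the exact procedure characterized in the report.

# Sketch of a two-sided normal tolerance interval intended to cover 95% of
# the population with 95% confidence, from a sparse sample. Uses Howe's
# approximation for the k-factor and assumes the data are roughly normal.
import numpy as np
from scipy import stats

def normal_tolerance_interval(x, coverage=0.95, confidence=0.95):
    n = len(x)
    mean, std = np.mean(x), np.std(x, ddof=1)
    z = stats.norm.ppf(0.5 + coverage / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, df=n - 1)
    k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)
    return mean - k * std, mean + k * std

sample = np.array([9.8, 10.4, 10.1, 9.6, 10.7])  # hypothetical sparse data
print(normal_tolerance_interval(sample))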

More Details

A user's guide to Sandia's Latin hypercube sampling software: LHS UNIX library/standalone version

Swiler, Laura P.; Wyss, Gregory D.

This document is a reference guide for the UNIX Library/Standalone version of the Latin Hypercube Sampling Software. This software has been developed to generate Latin hypercube multivariate samples. This version runs on Linux or UNIX platforms. This manual covers the use of the LHS code in a UNIX environment, run either as a standalone program or as a callable library. The underlying code in the UNIX Library/Standalone version of LHS is almost identical to the updated Windows version of LHS released in 1998 (SAND98-0210). However, some modifications were made to customize it for a UNIX environment and as a library that is called from the DAKOTA environment. This manual covers the use of the LHS code as a library and in the standalone mode under UNIX.
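
For readers who only need the sampling itself rather than the LHS library interface, an equivalent Latin hypercube sample can be generated with scipy's quasi-Monte Carlo module; the variable ranges below are illustrative, and this is not the Sandia LHS code.

# Sketch of Latin hypercube sampling using scipy's generator rather than
# the Sandia LHS library itself: 10 samples over two uniform variables.
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=10)   # samples on the unit hypercube
# Scale to physical ranges, e.g. x1 in [0, 5], x2 in [100, 200] (illustrative)
samples = qmc.scale(unit_samples, l_bounds=[0.0, 100.0], u_bounds=[5.0, 200.0])
print(samples)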

More Details

Accelerating Multiscale Materials Modeling with Machine Learning

Modine, N.A.; Stephens, John A.; Swiler, Laura P.; Thompson, Aidan P.; Vogel, Dayton J.; Cangi, Attila C.; Feilder, Lenz F.; Rajamanickam, Sivasankaran R.

The focus of this project is to accelerate and transform the workflow of multiscale materials modeling by developing an integrated toolchain that seamlessly combines DFT, SNAP, and LAMMPS (shown in Figure 1-1) with a machine-learning (ML) model that more efficiently extracts information from a smaller set of first-principles calculations. Our ML model enables us to accelerate first-principles data generation by interpolating existing high-fidelity data, and to extend the simulation scale by extrapolating high-fidelity data (10^2 atoms) to the mesoscale (10^4 atoms). It encodes the underlying physics of atomic interactions on the microscopic scale by adapting a variety of ML techniques such as deep neural networks (DNNs) and graph neural networks (GNNs). We developed a new surrogate model for density functional theory using deep neural networks. The developed ML surrogate is demonstrated in a workflow to generate accurate band energies, total energies, and density of the 298 K and 933 K aluminum systems. Furthermore, the models can be used to predict the quantities of interest for systems with more atoms than the training data set. We have demonstrated that the ML model can be used to compute the quantities of interest for systems with 100,000 Al atoms. Compared with a 2,000-atom Al system, the new surrogate model is as accurate as DFT but three orders of magnitude faster. We also explored optimal experimental design techniques to choose the training data and novel graph neural networks to train on smaller data sets. These are promising methods that merit further exploration.
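
As a generic illustration of the surrogate idea (mapping per-configuration descriptors to first-principles energies with a neural network), the sketch below uses scikit-learn with random stand-in descriptors and energies; it is not the project's DFT/SNAP/LAMMPS toolchain or its actual network architecture.

# Generic sketch of a neural-network surrogate that maps per-configuration
# descriptors to total energies. The descriptors, data, and network size are
# placeholders, not the project's actual models or training sets.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                              # stand-in descriptor vectors
y = X @ rng.normal(size=30) + 0.1 * rng.normal(size=500)    # stand-in energies

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
surrogate.fit(X_train, y_train)
print("test R^2:", surrogate.score(X_test, y_test))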

More Details

Advanced Uncertainty Quantification Methods for Circuit Simulation: Final Report LDRD 2016-0845

Keiter, Eric R.; Swiler, Laura P.; Wilcox, Ian Z.

This report summarizes the methods and algorithms that were developed on the Sandia National Laboratories LDRD project entitled "Advanced Uncertainty Quantification Methods for Circuit Simulation", which was project # 173331 and proposal # 2016-0845. As much of our work has been published in other reports and publications, this report gives a brief summary. Those who are interested in the technical details are encouraged to read the full published results and to contact the report authors for the status of follow-on projects.

More Details

Algorithm development for Prognostics and Health Management (PHM)

Swiler, Laura P.; Campbell, James E.; Lowder, Kelly S.; Doser, Adele D.

This report summarizes the results of a three-year LDRD project on prognostics and health management. 'Prognostics' refers to the capability to predict the probability of system failure over some future time interval (an alternative definition is the capability to predict the remaining useful life of a system). Prognostics are integrated with health monitoring (through inspections, sensors, etc.) to provide an overall PHM capability that optimizes maintenance actions and results in higher availability at a lower cost. Our goal in this research was to develop PHM tools that could be applied to a wide variety of equipment (repairable, non-repairable, manufacturing, weapons, battlefield equipment, etc.) and require minimal customization to move from one system to the next. Thus, our approach was to develop a toolkit of reusable software objects/components and architecture for their use. We have developed two software tools: an Evidence Engine and a Consequence Engine. The Evidence Engine integrates information from a variety of sources in order to take into account all the evidence that impacts a prognosis for system health. The Evidence Engine has the capability for feature extraction, trend detection, information fusion through Bayesian Belief Networks (BBN), and estimation of remaining useful life. The Consequence Engine involves algorithms to analyze the consequences of various maintenance actions. The Consequence Engine takes as input a maintenance and use schedule, spares information, and time-to-failure data on components, then generates maintenance and failure events, and evaluates performance measures such as equipment availability, mission capable rate, time to failure, and cost. This report summarizes the capabilities we have developed, describes the approach and architecture of the two engines, and provides examples of their use.
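
One ingredient of such an Evidence Engine, estimating remaining useful life from a sensed degradation trend, can be sketched as below; the degradation data and failure threshold are hypothetical, and the report's engine combines this kind of trending with Bayesian Belief Networks and other evidence sources.

# Sketch of a remaining-useful-life (RUL) estimate from a degradation trend:
# fit a linear trend to a hypothetical wear metric and extrapolate to a
# hypothetical failure threshold.
import numpy as np

hours = np.array([0, 50, 100, 150, 200, 250], dtype=float)
wear = np.array([0.02, 0.05, 0.09, 0.11, 0.15, 0.19])   # hypothetical degradation metric
failure_threshold = 0.30

slope, intercept = np.polyfit(hours, wear, deg=1)
predicted_failure_time = (failure_threshold - intercept) / slope
rul = predicted_failure_time - hours[-1]
print(f"estimated remaining useful life: {rul:.0f} hours")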

More Details

An active learning high-throughput microstructure calibration framework for solving inverse structure–process problems in materials informatics

Acta Materialia

Tran, Anh; Mitchell, John A.; Swiler, Laura P.; Wildey, Tim

Determining a process–structure–property relationship is the holy grail of materials science, where both computational prediction in the forward direction and materials design in the inverse direction are essential. Problems in materials design are often considered in the context of process–property linkage by bypassing the materials structure, or in the context of structure–property linkage as in microstructure-sensitive design problems. However, there is a lack of research effort in studying materials design problems in the context of process–structure linkage, which has great implications for reverse engineering. In this work, given a target microstructure, we propose an active learning high-throughput microstructure calibration framework to derive a set of processing parameters, which can produce an optimal microstructure that is statistically equivalent to the target microstructure. The proposed framework is formulated as a noisy multi-objective optimization problem, where each objective function measures a deterministic or statistical difference of the same microstructure descriptor between a candidate microstructure and a target microstructure. Furthermore, to significantly reduce the wall-clock waiting time, we enable the high-throughput feature of the microstructure calibration framework by adopting asynchronously parallel Bayesian optimization that exploits high-performance computing resources. Case studies in additive manufacturing and grain growth are used to demonstrate the applicability of the proposed framework, where kinetic Monte Carlo (kMC) simulation is used as a forward predictive model, such that for a given target microstructure, the target processing parameters that produced this microstructure are successfully recovered.
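
The core loop of such a framework, a Gaussian-process surrogate plus an acquisition function that proposes the next processing parameters, is sketched below in a deliberately simplified single-objective, synchronous form; the 'discrepancy' function is a stand-in for running the kMC model and comparing microstructure descriptors against the target.

# Highly simplified, single-objective, synchronous sketch of a Bayesian
# optimization loop (the paper's framework is multi-objective and
# asynchronously parallel). The discrepancy function stands in for running
# the forward model and scoring microstructure descriptors against a target.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def discrepancy(theta):                      # stand-in noisy objective
    return (theta - 0.3) ** 2 + 0.01 * np.random.randn()

candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
X = np.array([[0.1], [0.5], [0.9]])          # initial processing parameters
y = np.array([discrepancy(x[0]) for x in X])

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    imp = y.min() - mu
    ei = imp * norm.cdf(imp / (sigma + 1e-9)) + sigma * norm.pdf(imp / (sigma + 1e-9))
    x_next = candidates[np.argmax(ei)]       # expected-improvement acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, discrepancy(x_next[0]))

print("best processing parameter found:", X[np.argmin(y)])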

More Details

An initial comparison of methods for representing and aggregating experimental uncertainties involving sparse data

Collection of Technical Papers - AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference

Romero, Vicente J.; Swiler, Laura P.; Urbina, Angel U.

This paper discusses the handling and treatment of uncertainties corresponding to relatively few data samples in experimental characterization of random quantities. The importance of this topic extends beyond experimental uncertainty to situations where the derived experimental information is used for model validation or calibration. With very sparse data it is not practical to have a goal of accurately estimating the underlying variability distribution (probability density function, PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a desired percentage of the actual PDF, say 95% included probability, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the random-variable range corresponding to the desired percentage of the actual PDF. The performance of a variety of uncertainty representation techniques is tested and characterized in this paper according to these two opposing objectives. An initial set of test problems and results is presented here from a larger study currently underway.

More Details

Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota

Portone, Teresa P.; Niederhaus, John H.; Sanchez, Jason J.; Swiler, Laura P.

This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.
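
The central quantity in Bayesian model selection is each model's marginal likelihood (evidence); the sketch below approximates it by grid integration over a single parameter for two toy models and reports the log Bayes factor. The data and models are synthetic stand-ins, not the ALEGRA yield models or the Dakota workflow.

# Sketch of Bayesian model selection via marginal likelihoods (evidence),
# using a coarse grid integration over a single parameter per model.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 20)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)     # synthetic observations

def log_evidence(model, theta_grid, sigma=0.1):
    # p(y | M) is approximated by averaging the likelihood over a uniform prior grid.
    log_liks = np.array([stats.norm.logpdf(y, loc=model(x, t), scale=sigma).sum()
                         for t in theta_grid])
    m = log_liks.max()
    return m + np.log(np.mean(np.exp(log_liks - m)))   # log-sum-exp for stability

grid = np.linspace(0.0, 4.0, 400)
log_ev_linear = log_evidence(lambda x, a: a * x, grid)               # model 1: y = a*x
log_ev_const = log_evidence(lambda x, a: np.full_like(x, a), grid)   # model 2: y = a

print("log Bayes factor (linear vs constant):", log_ev_linear - log_ev_const)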

More Details

Application of finite element, global polynomial, and kriging response surfaces in Progressive Lattice Sampling designs

Romero, Vicente J.; Swiler, Laura P.; Giunta, Anthony A.

This paper examines the modeling accuracy of finite element interpolation, kriging, and polynomial regression used in conjunction with the Progressive Lattice Sampling (PLS) incremental design-of-experiments approach. PLS is a paradigm for sampling a deterministic hypercubic parameter space by placing and incrementally adding samples in a manner intended to maximally reduce lack of knowledge in the parameter space. When combined with suitable interpolation methods, PLS is a formulation for progressive construction of response surface approximations (RSA) in which the RSA are efficiently upgradable, and upon upgrading, offer convergence information essential in estimating error introduced by the use of RSA in the problem. The three interpolation methods tried here are examined for performance in replicating an analytic test function as measured by several different indicators. The process described here provides a framework for future studies using other interpolation schemes, test functions, and measures of approximation quality.
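
A miniature version of this kind of comparison is sketched below: a kriging (Gaussian process) response surface and a quadratic polynomial response surface are fit to the same small design and scored on held-out points; the test function and random design are stand-ins for the paper's analytic test function and PLS designs.

# Illustrative comparison of a kriging (Gaussian process) response surface
# and a quadratic polynomial response surface fit to the same small design.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def f(X):                                    # analytic test function (stand-in)
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
X_train, X_test = rng.uniform(size=(16, 2)), rng.uniform(size=(200, 2))
y_train, y_test = f(X_train), f(X_test)

kriging = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)
quadratic = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X_train, y_train)

for name, model in [("kriging", kriging), ("quadratic", quadratic)]:
    rmse = np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2))
    print(name, "RMSE:", rmse)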

More Details

Arctic Climate Systems Analysis

Ivey, Mark D.; Robinson, David G.; Boslough, Mark B.; Backus, George A.; Peterson, Kara J.; van Bloemen Waanders, Bart G.; Swiler, Laura P.; Desilets, Darin M.; Reinert, Rhonda K.

This study began with a challenge from program area managers at Sandia National Laboratories to technical staff in the energy, climate, and infrastructure security areas: apply a systems-level perspective to existing science and technology program areas in order to determine technology gaps, identify new technical capabilities at Sandia that could be applied to these areas, and identify opportunities for innovation. The Arctic was selected as one of these areas for systems-level analyses, and this report documents the results. In this study, an emphasis was placed on the Arctic atmosphere since Sandia has been active in atmospheric research in the Arctic since 1997. This study begins with a discussion of the challenges and benefits of analyzing the Arctic as a system. It goes on to discuss current and future needs of the defense, scientific, energy, and intelligence communities for more comprehensive data products related to the Arctic; assess the current state of atmospheric measurement resources available for the Arctic; and explain how the capabilities at Sandia National Laboratories can be used to address the identified technological, data, and modeling needs of the defense, scientific, energy, and intelligence communities for Arctic support.

More Details

Automated Algorithms for Quantum-Level Accuracy in Atomistic Simulations: LDRD Final Report

Thompson, Aidan P.; Schultz, Peter A.; Crozier, Paul C.; Moore, Stan G.; Swiler, Laura P.; Stephens, John A.; Trott, Christian R.; Foiles, Stephen M.; Tucker, Garritt J.

This report summarizes the results of LDRD project 12-0395, titled "Automated Algorithms for Quantum-level Accuracy in Atomistic Simulations." During the course of this LDRD, we have developed an interatomic potential for solids and liquids called the Spectral Neighbor Analysis Potential (SNAP). The SNAP potential has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected onto a basis of hyperspherical harmonics in four dimensions. The SNAP coefficients are determined using weighted least-squares linear regression against the full QM training set. This allows the SNAP potential to be fit in a robust, automated manner to large QM data sets using many bispectrum components. The calculation of the bispectrum components and the SNAP potential are implemented in the LAMMPS parallel molecular dynamics code. Global optimization methods in the DAKOTA software package are used to seek out good choices of hyperparameters that define the overall structure of the SNAP potential. FitSnap.py, a Python-based software package interfacing to both LAMMPS and DAKOTA, is used to formulate the linear regression problem, solve it, and analyze the accuracy of the resultant SNAP potential. We describe a SNAP potential for tantalum that accurately reproduces a variety of solid and liquid properties. Most significantly, in contrast to existing tantalum potentials, SNAP correctly predicts the Peierls barrier for screw dislocation motion. We also present results from SNAP potentials generated for indium phosphide (InP) and silica (SiO2). We describe efficient algorithms for calculating SNAP forces and energies in molecular dynamics simulations using massively parallel computers and advanced processor architectures. Finally, we briefly describe the MSM method for efficient calculation of electrostatic interactions on massively parallel computers.
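
The central fitting step, determining SNAP coefficients by weighted linear least squares against quantum training data, can be sketched generically as below; the bispectrum-descriptor matrix, reference energies, and weights are random stand-ins rather than output of LAMMPS or FitSnap.py.

# Generic sketch of the weighted linear least-squares step used to determine
# SNAP-style coefficients. Here the descriptor matrix B, reference energies,
# and per-configuration weights are random stand-ins; in practice they come
# from LAMMPS bispectrum calculations and the QM training set.
import numpy as np

rng = np.random.default_rng(0)
n_configs, n_bispectrum = 200, 55
B = rng.normal(size=(n_configs, n_bispectrum))      # descriptor matrix (stand-in)
e_qm = B @ rng.normal(size=n_bispectrum)            # reference QM energies (stand-in)
w = rng.uniform(0.5, 2.0, size=n_configs)           # per-configuration weights

# Weighted least squares: scale rows by sqrt(w), then solve the linear system.
sqrt_w = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(B * sqrt_w[:, None], e_qm * sqrt_w, rcond=None)
print("fitted coefficients (first 5):", coeffs[:5])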

More Details