Simulating subsurface contaminant transport at the kilometer scale often entails modeling reactive flow and transport within and through complex geologic structures. These structures are typically meshed by hand, so for uncertainty studies of subsurface flow and transport the geologic structure is usually represented by one or a few deterministically generated geological models. Uncertainty in geologic structure, however, can have a significant impact on contaminant transport. In this study, the impact of geologic structure on contaminant tracer transport in a shale formation is investigated for a simplified generic deep geologic repository for permanent disposal of spent nuclear fuel. An open-source modeling framework is used to perform a sensitivity analysis of the transport of two tracers from a generic spent nuclear fuel repository with uncertain locations of the interfaces between the strata of the geologic structure. The automated workflow uses sampled realizations of the geological structural model in addition to uncertain flow parameters in a nested sensitivity analysis. Concentrations of the tracers at observation points within, in line with, and downstream of the repository are used as the quantities of interest for determining model sensitivity to input parameters and geological realization. The results of the study indicate that the location of strata interfaces in the geological structure has a first-order impact on tracer transport in the example shale formation, and that this impact may be greater than that of the uncertain flow parameters.
This paper details a computational framework to produce automated, graphical workflows, and how this framework can be deployed to support complex modeling problems like those in nuclear engineering. Key benefits of the framework include: automating previously manual workflows; intuitive construction and communication of workflows through a graphical interface; and automated file transfer and handling for workflows deployed across heterogeneous computing resources. This paper demonstrates the framework's application to probabilistic post-closure performance assessment of systems for deep geologic disposal of nuclear waste. However, the framework is a general capability that can help users running a variety of computational studies.
Causal discovery algorithms construct hypothesized causal graphs that depict causal dependencies among variables in observational data. While powerful, the accuracy of these algorithms is highly sensitive to the underlying dynamics of the system in ways that have not been fully characterized in the literature. In this report, we benchmark the PCMCI causal discovery algorithm in its application to gridded spatiotemporal systems. Effectively computing grid-level causal graphs on large grids will enable analysis of the causal impacts of transient and mobile spatial phenomena in large systems, such as the Earth’s climate. We evaluate the performance of PCMCI with a set of structural causal models, using simulated spatial vector autoregressive processes in one- and two-dimensions. We develop computational and analytical tools for characterizing these processes and their associated causal graphs. Our findings suggest that direct application of PCMCI is not suitable for the analysis of dynamical spatiotemporal gridded systems, such as climatological data, without significant preprocessing and downscaling of the data. PCMCI requires unrealistic sample sizes to achieve acceptable performance on even modestly sized problems and suffers from a notable curse of dimensionality. This work suggests that, even under generous structural assumptions, significant additional algorithmic improvements are needed before causal discovery algorithms can be reliably applied to grid-level outputs of earth system models.
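As a concrete illustration of the simulated test systems described above, the sketch below generates a one-dimensional spatial vector autoregressive process in which each grid cell depends on itself and its two nearest neighbors at the previous time step. The grid size, coefficients, and noise level are arbitrary placeholder values, not those used in the benchmark; the resulting (time, cell) array is the kind of input a grid-level causal discovery algorithm such as PCMCI would be asked to analyze.

```python
import numpy as np

# Simulate a 1-D spatial VAR(1) process: each cell depends on itself and its
# two nearest neighbors at the previous time step, plus Gaussian noise.
# Grid size, coefficients, and noise scale are arbitrary illustration values.
rng = np.random.default_rng(0)
n_cells, n_steps = 20, 2000
a_self, a_neighbor, noise_sd = 0.5, 0.2, 1.0

x = np.zeros((n_steps, n_cells))
for t in range(1, n_steps):
    left = np.roll(x[t - 1], 1)    # periodic boundary for simplicity
    right = np.roll(x[t - 1], -1)
    x[t] = a_self * x[t - 1] + a_neighbor * (left + right) \
           + noise_sd * rng.normal(size=n_cells)

# x has shape (time, grid cell); the implied ground-truth causal graph links
# each cell to itself and its neighbors at lag 1, which is what a grid-level
# causal discovery algorithm would be asked to recover.
print(x.shape)
```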
The ground truth program used simulations as test beds for social science research methods. The simulations had known ground truth and were capable of producing large amounts of data. This allowed research teams to run experiments and ask questions of these simulations similar to social scientists studying real-world systems, and enabled robust evaluation of their causal inference, prediction, and prescription capabilities. We tested three hypotheses about research effectiveness using data from the ground truth program, specifically looking at the influence of complexity, causal understanding, and data collection on performance. We found some evidence that system complexity and causal understanding influenced research performance, but no evidence that data availability contributed. The ground truth program may be the first robust coupling of simulation test beds with an experimental framework capable of teasing out factors that determine the success of social science research.
Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model’s causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model’s causal structure diagram, which characterises the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model’s causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model’s fundamental assumptions and design.
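For readers who want the flavor of such metrics, here is a small sketch using networkx on a toy causal structure diagram. The edge list is invented for illustration, the cyclomatic complexity uses the classic E − N + 2P graph formula, and the feedback density computed here, the fraction of edges lying on at least one directed cycle, is only an illustrative proxy rather than the article's exact definition.

```python
import networkx as nx

# Toy causal structure diagram: nodes are model variables, directed edges are
# causal influences extracted from the code. The edge list is illustrative.
g = nx.DiGraph([
    ("susceptible", "infections"), ("infected", "infections"),
    ("infections", "infected"), ("infected", "recovered"),
    ("infected", "behavior"), ("behavior", "infections"),
])

E, N = g.number_of_edges(), g.number_of_nodes()
P = nx.number_weakly_connected_components(g)

# Classic cyclomatic complexity of the diagram viewed as a graph.
cyclomatic = E - N + 2 * P

# Feedback density, taken here as the fraction of edges that lie on at least
# one directed cycle (an illustrative proxy for cycle-based interconnectedness).
edges_on_cycles = set()
for cycle in nx.simple_cycles(g):
    for u, v in zip(cycle, cycle[1:] + cycle[:1]):
        edges_on_cycles.add((u, v))
feedback_density = len(edges_on_cycles) / E

print(cyclomatic, round(feedback_density, 2))
```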
Spent nuclear fuel repository simulations are currently not able to incorporate detailed fuel matrix degradation (FMD) process models due to their computational cost, especially when large numbers of waste packages breach. The current paper uses machine learning to develop artificial neural network and k-nearest neighbor regression surrogate models that approximate the detailed FMD process model while being computationally much faster to evaluate. Using fuel cask temperature, dose rate, and the environmental concentrations of CO₃²⁻, O₂, Fe²⁺, and H₂ as inputs, these surrogates show good agreement with the FMD process model predictions of the UO₂ degradation rate for conditions within the range of the training data. A demonstration in a full-scale shale repository reference case simulation shows that the incorporation of the surrogate models captures local and temporal environmental effects on fuel degradation rates while retaining good computational efficiency.
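A minimal sketch of how such surrogates might be built with scikit-learn follows, using synthetic stand-in data in place of the FMD process-model training set; the six input quantities follow the abstract, while the toy response function, sample sizes, and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for FMD process-model runs: inputs are temperature, dose
# rate, and concentrations of CO3^2-, O2, Fe^2+, and H2; the output is a
# fabricated stand-in for the UO2 degradation rate.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 6))
y = 0.5 * X[:, 0] + 2.0 * X[:, 1] + np.log1p(X[:, 2]) - X[:, 4] \
    + 0.1 * rng.normal(size=2000)

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=2000, random_state=0))
knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5))

ann.fit(X[:1500], y[:1500])
knn.fit(X[:1500], y[:1500])
print("ANN R^2:", ann.score(X[1500:], y[1500:]))
print("kNN R^2:", knn.score(X[1500:], y[1500:]))
```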
Geologic Disposal Safety Assessment Framework is a state-of-the-art simulation software toolkit for probabilistic post-closure performance assessment of systems for deep geologic disposal of nuclear waste developed by the United States Department of Energy. This paper presents a generic reference case and shows how it is being used to develop and demonstrate performance assessment methods within the Geologic Disposal Safety Assessment Framework that mitigate some of the challenges posed by high uncertainty and limited computational resources. Variance-based global sensitivity analysis is applied to assess the effects of spatial heterogeneity using graph-based summary measures for scalar and time-varying quantities of interest. Behavior of the system with respect to spatial heterogeneity is further investigated using ratios of water fluxes. This analysis shows that spatial heterogeneity is a dominant uncertainty in predictions of repository performance which can be identified in global sensitivity analysis using proxy variables derived from graph descriptions of discrete fracture networks. New quantities of interest defined using water fluxes proved useful for better understanding overall system behavior.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (Sassani et al. 2021). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media. This report describes fiscal year (FY) 2022 advances of the Geologic Disposal Safety Assessment (GDSA) performance assessment (PA) development groups of the SFWST Campaign. The common mission of these groups is to develop a geologic disposal system modeling capability for nuclear waste that can be used to assess probabilistically the performance of generic disposal options and generic sites. The modeling capability under development is called GDSA Framework (pa.sandia.gov). GDSA Framework is a coordinated set of codes and databases designed for probabilistically simulating the release and transport of disposed radionuclides from a repository to the biosphere for post-closure performance assessment. Primary components of GDSA Framework include PFLOTRAN to simulate the major features, events, and processes (FEPs) over time, Dakota to propagate uncertainty and analyze sensitivities, meshing codes to define the domain, and various other software for rendering properties, processing data, and visualizing results.
The focus of this project is to accelerate and transform the workflow of multiscale materials modeling by developing an integrated toolchain seamlessly combining DFT, SNAP, and LAMMPS (shown in Figure 1-1) with a machine-learning (ML) model that more efficiently extracts information from a smaller set of first-principles calculations. Our ML model enables us to accelerate first-principles data generation by interpolating existing high-fidelity data, and to extend the simulation scale by extrapolating high-fidelity data (10² atoms) to the mesoscale (10⁴ atoms). It encodes the underlying physics of atomic interactions on the microscopic scale by adapting a variety of ML techniques such as deep neural networks (DNNs) and graph neural networks (GNNs). We developed a new surrogate model for density functional theory using deep neural networks. The developed ML surrogate is demonstrated in a workflow to generate accurate band energies, total energies, and density of 298 K and 933 K aluminum systems. Furthermore, the models can be used to predict the quantities of interest for systems with more atoms than those in the training data set. We have demonstrated that the ML model can be used to compute the quantities of interest for systems with 100,000 Al atoms. For a 2,000-atom Al system, the new surrogate model is as accurate as DFT but three orders of magnitude faster. We also explored optimal experimental design techniques to choose the training data and novel graph neural networks to train on smaller data sets. These are promising methods that warrant further exploration.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by SFWST Campaign and its predecessor the Used Fuel Disposition (UFD) campaign.
Virtual machine emulation environments provide ideal testbeds for cybersecurity evaluations because they run real software binaries in a scalable, offline test setting that is suitable for assessing the impacts of software security flaws on the system. Verification of such emulations determines whether the environment is working as intended. Verification can focus on various aspects such as timing realism, traffic realism, and resource realism. In this paper, we study resource realism and issues associated with virtual machine resource utilization. We examine telemetry metrics gathered from a series of structured experiments which involve large numbers of parallel emulations meant to oversubscribe resources at some point. We present an approach to use telemetry metrics for emulation verification, and we demonstrate this approach on two cyber scenarios. Descriptions of the experimental configurations are provided along with a detailed discussion of statistical tests used to compare telemetry metrics. Results demonstrate the potential for a structured experimental framework, combined with statistical analysis of telemetry metrics, to support emulation verification. We conclude with comments on generalizability and potential future work.
The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
Social systems are uniquely complex and difficult to study, but understanding them is vital to solving the world’s problems. The Ground Truth program developed a new way of testing the research methods that attempt to understand and leverage the Human Domain and its associated complexities. The program developed simulations of social systems as virtual world test beds. Not only were these simulations able to produce data on future states of the system under various circumstances and scenarios, but their causal ground truth was also explicitly known. Research teams studied these virtual worlds, facilitating deep validation of causal inference, prediction, and prescription methods. The Ground Truth program model provides a way to test and validate research methods to an extent previously impossible, and to study the intricacies and interactions of different components of research.
We develop a framework for Gaussian processes regression constrained by boundary value problems. The framework may be applied to infer the solution of a well-posed boundary value problem with a known second-order differential operator and boundary conditions, but for which only scattered observations of the source term are available. Scattered observations of the solution may also be used in the regression. The framework combines co-kriging with the linear transformation of a Gaussian process together with the use of kernels given by spectral expansions in eigenfunctions of the boundary value problem. Thus, it benefits from a reduced-rank property of covariance matrices. We demonstrate that the resulting framework yields more accurate and stable solution inference as compared to physics-informed Gaussian process regression without boundary condition constraints.
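A minimal numpy sketch of the co-kriging idea for the simplest such problem, −u″ = f on [0, 1] with homogeneous Dirichlet boundary conditions, is given below: kernels for f and u are built from a truncated spectral expansion in the operator's sine eigenfunctions, and the solution is inferred from noisy scattered observations of the source term alone. The spectrum coefficients, observation count, and noise level are arbitrary illustration choices rather than values from the paper.

```python
import numpy as np

# Infer u solving -u'' = f on [0,1], u(0)=u(1)=0, from noisy scattered
# observations of f, using kernels built from the spectral expansion in the
# operator's eigenfunctions phi_k(x) = sqrt(2) sin(k*pi*x) with eigenvalues
# lam_k = (k*pi)^2. The coefficients gamma_k are illustrative choices.
rng = np.random.default_rng(0)
K = 30
k = np.arange(1, K + 1)
lam = (k * np.pi) ** 2
gamma = 1.0 / k ** 2                       # prior spectrum of f (rank-K kernel)

def phi(x):
    return np.sqrt(2.0) * np.sin(np.outer(x, k) * np.pi)   # shape (n, K)

def kernel(xa, xb, weights):
    return phi(xa) @ np.diag(weights) @ phi(xb).T

# Noisy observations of f(x) = sin(pi x); exact solution is u = sin(pi x)/pi^2.
x_f = rng.uniform(size=25)
y_f = np.sin(np.pi * x_f) + 0.01 * rng.normal(size=x_f.size)
sigma2 = 0.01 ** 2

x_star = np.linspace(0.0, 1.0, 101)
K_ff = kernel(x_f, x_f, gamma) + sigma2 * np.eye(x_f.size)
K_uf = kernel(x_star, x_f, gamma / lam)    # cross-covariance of u with f
u_mean = K_uf @ np.linalg.solve(K_ff, y_f)

u_exact = np.sin(np.pi * x_star) / np.pi ** 2
print("max abs error:", np.abs(u_mean - u_exact).max())
```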
This paper applies sensitivity and uncertainty analysis to compare two model alternatives for fuel matrix degradation for performance assessment of a generic crystalline repository. The results show that this model choice has little effect on uncertainty in the peak 129I concentration. The small impact of this choice is likely due to the higher importance of uncertainty in the instantaneous release fraction and differences in epistemic uncertainty between the alternatives.
Adams, Brian M.; Bohnhoff, William J.; Dalbey, Keith R.; Ebeida, Mohamed S.; Eddy, John P.; Eldred, Michael S.; Hooper, Russell W.; Hough, Patricia D.; Hu, Kenneth T.; Jakeman, John D.; Khalil, Mohammad; Maupin, Kathryn A.; Monschke, Jason A.; Ridgway, Elliott M.; Rushdi, Ahmad A.; Seidl, Daniel T.; Stephens, John A.; Swiler, Laura P.; Foulk, James W.; Winokur, Justin G.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). A high priority for SFWST disposal R&D is disposal system modeling (DOE 2012, Table 6; Sevougian et al. 2019). The SFWST Geologic Disposal Safety Assessment (GDSA) work package is charged with developing a disposal system modeling and analysis capability for evaluating generic disposal system performance for nuclear waste in geologic media.
Swiler, Laura P.; Becker, Dirk-Alexander; Brooks, Dusty M.; Govaerts, Joan; Koskinen, Lasse; Plischke, Elmar; Rohlig, Klaus-Jurgen; Saveleva, Elena; Spiessl, Sabine M.; Stein, Emily; Svitelman, Valentina
Over the past four years, an informal working group has developed to investigate existing sensitivity analysis methods, examine new methods, and identify best practices. The focus is on the use of sensitivity analysis in case studies involving geologic disposal of spent nuclear fuel or nuclear waste. To examine ideas and have applicable test cases for comparison purposes, we have developed multiple case studies. Four of these case studies are presented in this report: the GRS clay case, the SNL shale case, the Dessel case, and the IBRAE groundwater case. We present the different sensitivity analysis methods investigated by various groups, the results obtained by different groups and different implementations, and summarize our findings.
The June 15, 1991 Mt. Pinatubo eruption is simulated in E3SM by injecting 10 Tg of SO2 gas in the stratosphere, turning off prescribed volcanic aerosols, and enabling E3SM to treat stratospheric volcanic aerosols prognostically. This experimental prognostic treatment of volcanic aerosols in the stratosphere results in some realistic behaviors (SO2 evolves into H2SO4 which heats the lower stratosphere), and some expected biases (H2SO4 aerosols sediment out of the stratosphere too quickly). Climate fingerprinting techniques are used to establish a Mt. Pinatubo fingerprint based on the vertical profile of temperature from the E3SMv1 DECK ensemble. By projecting reanalysis data and preindustrial simulations onto the fingerprint, the Mt. Pinatubo stratospheric heating anomaly is detected. Projecting the experimental prognostic aerosol simulation onto the fingerprint also results in a detectable heating anomaly, but, as expected, the duration is too short relative to reanalysis data.
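The sketch below illustrates the generic pattern-based fingerprinting step, computing a fingerprint as the leading EOF of vertical temperature-anomaly profiles and projecting another run onto it, using random stand-in arrays in place of E3SM output and reanalysis; it is a schematic of the method, not the analysis performed in the study.

```python
import numpy as np

# Stand-in arrays: (member, time, level) temperature anomalies from a control
# ensemble and a (time, level) array from the perturbed (volcanic) run.
rng = np.random.default_rng(0)
n_members, n_times, n_levels = 5, 240, 30
deck = rng.normal(size=(n_members, n_times, n_levels))
volcanic_run = rng.normal(size=(n_times, n_levels)) + 0.5

# Fingerprint: leading EOF (eigenvector of the level-by-level covariance)
# of the stacked ensemble anomalies.
stacked = deck.reshape(-1, n_levels)
cov = np.cov(stacked, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
fingerprint = eigvecs[:, -1]                 # pattern with the largest variance

# Project the perturbed run and the control noise onto the fingerprint and
# form a crude signal-to-noise ratio for detection.
signal = volcanic_run @ fingerprint
noise = stacked @ fingerprint
snr = (signal.mean() - noise.mean()) / noise.std(ddof=1)
print(f"signal-to-noise ratio: {snr:.2f}")
```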
This report summarizes the activities performed as part of the Science and Engineering of Cybersecurity by Uncertainty quantification and Rigorous Experimentation (SECURE) Grand Challenge LDRD project. We provide an overview of the research done in this project, including work on cyber emulation, uncertainty quantification, and optimization. We present examples of integrated analyses performed on two case studies: a network scanning/detection study and a malware command and control study. We highlight the importance of experimental workflows and list references of papers and presentations developed under this project. We outline lessons learned and suggestions for future work.
All disciplines that use models to predict the behavior of real-world systems need to determine the accuracy of the models’ results. Techniques for verification, validation, and uncertainty quantification (VVUQ) focus on improving the credibility of computational models and assessing their predictive capability. VVUQ emphasizes rigorous evaluation of models and how they are applied to improve understanding of model limitations and quantify the accuracy of model predictions.
This report presents the results of the “Foundations of Rigorous Cyber Experimentation” (FORCE) Laboratory Directed Research and Development (LDRD) project. This project is a companion project to the “Science and Engineering of Cyber security through Uncertainty quantification and Rigorous Experimentation” (SECURE) Grand Challenge LDRD project. This project leverages the offline, controlled nature of cyber experimentation technologies in general, and emulation testbeds in particular, to assess how uncertainties in network conditions affect uncertainties in key metrics. We conduct extensive experimentation using a Firewheel emulation-based cyber testbed model of Invisible Internet Project (I2P) networks to understand a de-anonymization attack formerly presented in the literature. Our goals in this analysis are to see if we can leverage emulation testbeds to produce reliably repeatable experimental networks at scale, identify significant parameters influencing experimental results, replicate the previous results, quantify uncertainty associated with the predictions, and apply multi-fidelity techniques to forecast results to real-world network scales. The I2P networks we study are up to three orders of magnitude larger than the networks studied in SECURE and presented additional challenges to identify significant parameters. The key contributions of this project are the application of SECURE techniques such as UQ to a scenario of interest and scaling the SECURE techniques to larger network sizes. This report describes the experimental methods and results of these studies in more detail. In addition, the process of constructing these large-scale experiments tested the limits of the Firewheel emulation-based technologies. Therefore, another contribution of this work is that it informed the Firewheel developers of scaling limitations, which were subsequently corrected.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by SFWST Campaign and its predecessor the Used Fuel Disposition (UFD) campaign. This report fulfills the GDSA Uncertainty and Sensitivity Analysis Methods work package (SF-21SN01030404) level 3 milestone, Uncertainty and Sensitivity Analysis Methods and Applications in GDSA Framework (FY2021) (M3SF-21SN010304042). It presents high level objectives and strategy for development of uncertainty and sensitivity analysis tools, demonstrates uncertainty quantification (UQ) and sensitivity analysis (SA) tools in GDSA Framework in FY21, and describes additional UQ/SA tools whose future implementation would enhance the UQ/SA capability of GDSA Framework. This work was closely coordinated with the other Sandia National Laboratory GDSA work packages: the GDSA Framework Development work package (SF-21SN01030405), the GDSA Repository Systems Analysis work package (SF-21SN01030406), and the GDSA PFLOTRAN Development work package (SF-21SN01030407). This report builds on developments reported in previous GDSA Framework milestones, particularly M3SF 20SN010304032.
The modern scientific process often involves the development of a predictive computational model. To improve its accuracy, a computational model can be calibrated to a set of experimental data. A variety of validation metrics can be used to quantify this process. Some of these metrics have direct physical interpretations and a history of use, while others, especially those for probabilistic data, are more difficult to interpret. In this work, a variety of validation metrics are used to quantify the accuracy of different calibration methods. Frequentist and Bayesian perspectives are used with both fixed effects and mixed-effects statistical models. Through a quantitative comparison of the resulting distributions, the most accurate calibration method can be selected. Two examples are included which compare the results of various validation metrics for different calibration methods. It is quantitatively shown that, in the presence of significant laboratory biases, a fixed effects calibration is significantly less accurate than a mixed-effects calibration. This is because the mixed-effects statistical model better characterizes the underlying parameter distributions than the fixed effects model. The results suggest that validation metrics can be used to select the most accurate calibration model for a particular empirical model with corresponding experimental data.
The shock hydrodynamics code ALEGRA and the optimization and uncertainty quantification toolkit Dakota are used to calibrate and select between three competing steel yield models, taking uncertainties in the system into account. A Bayesian model selection procedure is used to choose between the models in a systematic, automated fashion, within an uncertainty quantification workflow. Time-series penetration data of a long tungsten-alloy rod impacting a hardened steel plate at approximately 1250 m/s, along with their measurement uncertainty, are used to calibrate and select between the models. The procedure finds that between the Johnson–Cook, Steinberg–Guinan–Lund, and Zerilli–Armstrong stress models, Zerilli–Armstrong performs the best.
Network modeling is a powerful tool to enable rapid analysis of complex systems that can be challenging to study directly using physical testing. Two approaches are considered: emulation and simulation. The former runs real software on virtualized hardware, while the latter mimics the behavior of network components and their interactions in software. Although emulation provides an accurate representation of physical networks, this approach alone cannot guarantee the characterization of the system under realistic operative conditions. Operative conditions for physical networks are often characterized by intrinsic variability (payload size, packet latency, etc.) or a lack of precise knowledge regarding the network configuration (bandwidth, delays, etc.); therefore uncertainty quantification (UQ) strategies should be also employed. UQ strategies require multiple evaluations of the system with a number of evaluation instances that roughly increases with the problem dimensionality, i.e., the number of uncertain parameters. It follows that a typical UQ workflow for network modeling based on emulation can easily become unattainable due to its prohibitive computational cost. In this paper, a multifidelity sampling approach is discussed and applied to network modeling problems. The main idea is to optimally fuse information coming from simulations, which are a low-fidelity version of the emulation problem of interest, in order to decrease the estimator variance. By reducing the estimator variance in a sampling approach it is usually possible to obtain more reliable statistics and therefore a more reliable system characterization. Several network problems of increasing difficulty are presented. For each of them, the performance of the multifidelity estimator is compared with respect to the single fidelity counterpart, namely, Monte Carlo sampling. For all the test problems studied in this work, the multifidelity estimator demonstrated an increased efficiency with respect to MC.
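A minimal numpy sketch of the control-variate form of multifidelity sampling is shown below; the two analytic functions stand in for expensive emulation and cheap simulation outputs of the same quantity, and the sample counts and weight estimate are illustrative choices rather than the configurations studied in the paper.

```python
import numpy as np

# Toy control-variate multifidelity estimator: f_hi stands in for an expensive
# emulation run, f_lo for a cheap network simulation of the same quantity.
rng = np.random.default_rng(0)

def f_hi(z):               # "emulation": expensive, treated as ground truth
    return np.sin(z) + 0.1 * z ** 2

def f_lo(z):               # "simulation": cheap, simplified version of f_hi
    return np.sin(z)

n_hi, n_lo = 50, 5000
z_hi = rng.normal(size=n_hi)            # shared samples run on both models
z_lo = rng.normal(size=n_lo)            # extra cheap-only samples

y_hi, y_lo_shared, y_lo = f_hi(z_hi), f_lo(z_hi), f_lo(z_lo)

# Control-variate weight estimated from the shared samples.
alpha = np.cov(y_hi, y_lo_shared)[0, 1] / np.var(y_lo_shared, ddof=1)

mf_estimate = y_hi.mean() + alpha * (y_lo.mean() - y_lo_shared.mean())
mc_estimate = y_hi.mean()               # single-fidelity Monte Carlo baseline
print(mc_estimate, mf_estimate)
```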
This report presents the results of a collaborative effort under the Verification, Validation, and Uncertainty Quantification (VVUQ) thrust area of the North American Energy Resilience Model (NAERM) program. The goal of the effort described in this report was to integrate the Dakota software with the NAERM software framework to demonstrate sensitivity analysis of a co-simulation for NAERM.
Gaussian process regression is a popular Bayesian framework for surrogate modeling of expensive data sources. As part of a larger effort in scientific machine learning, many recent works have incorporated physical constraints or other a priori information within Gaussian process regression to supplement limited data and regularize the behavior of the model. We provide an overview and survey of several classes of Gaussian process constraints, including positivity or bound constraints, monotonicity and convexity constraints, differential equation constraints provided by linear PDEs, and boundary condition constraints. We compare the strategies behind each approach as well as the differences in implementation, concluding with a discussion of the computational challenges introduced by constraints.
This report summarizes work done under the Verification, Validation, and Uncertainty Quantification (VVUQ) thrust area of the North American Energy Resilience Model (NAERM) Program. The specific task of interest described in this report is focused on sensitivity analysis of scenarios involving failures of both wind turbines and thermal generators under extreme cold-weather temperature conditions as would be observed in a Polar Vortex event.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and highlevel nuclear waste (HLW). A high priority for SFWST disposal R&D is to develop a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media. This report describes fiscal year (FY) 2020 advances of the Geologic Disposal Safety Assessment (GDSA) Framework and PFLOTRAN development groups of the SFWST Campaign. The common mission of these groups is to develop a geologic disposal system modeling capability for nuclear waste that can be used to probabilistically assess the performance of disposal options and generic sites. The capability is a framework called GDSA Framework that employs high-performance computing (HPC) capable codes PFLOTRAN and Dakota.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by SFWST Campaign and its predecessor the Used Fuel Disposition (UFD) campaign. This report fulfills the GDSA Uncertainty and Sensitivity Analysis Methods work package (SF-20SN01030403) level 3 milestone, Advances in Uncertainty and Sensitivity Analysis Methods and Applications in GDSA Framework (M3SF-20SN010304032). It presents high level objectives and strategy for development of uncertainty and sensitivity analysis tools, demonstrates uncertainty quantification (UQ) and sensitivity analysis (SA) tools in GDSA Framework in FY20, and describes additional UQ/SA tools whose future implementation would enhance the UQ/SA capability of GDSA Framework. This work was closely coordinated with the other Sandia National Laboratory GDSA work packages: the GDSA Framework Development work package (SF-20SN01030404), the GDSA Repository Systems Analysis work package (SF-20SN01030405), and the GDSA PFLOTRAN Development work package (SF-20SN01030406). This report builds on developments reported in previous GDSA Framework milestones, particularly M2SF-19SN01030403.
This report summarizes work done under the Laboratory Directed Research and Development (LDRD) project titled "Incorporating physical constraints into Gaussian process surrogate models." In this project, we explored a variety of strategies for constraint implementations. We considered bound constraints, monotonicity and related convexity constraints, Gaussian processes constrained to satisfy linear operator constraints representing physical laws expressed as partial differential equations, and intrinsic boundary condition constraints. We wrote three papers and are currently finishing two others. We developed initial software implementations for some approaches. This report summarizes the work done under this LDRD.
Surrogate model development is a key resource in the scientific modeling community for providing computational expedience when simulating complex systems without loss of great fidelity. The initial step to development of a surrogate model is identification of the primary governing components of the system. Principal component analysis (PCA) is a widely used data science technique that provides inspection of such driving factors, when the objective for modeling is to capture the greatest sources of variance inherent to a dataset. Although an efficient linear dimension reduction tool, PCA makes the fundamental assumption that the data is continuous and normally distributed. Thus, it provides ideal performance when these conditions are met. In the case for which cyber emulations provide realizations of a port scanning scenario, the data to be modeled follows a discrete time series function comprised of monotonically increasing piece-wise constant steps. The sources of variance are related to the timing and magnitude of these steps. Therefore, we consider using XPCA, an extension to PCA for continuous and discrete random variates. This report provides the documentation of the trade-offs between the PCA and XPCA linear dimension reduction algorithms, for the intended purpose to identify key components of greatest variance in our time series data. These components will ultimately provide the basis for future surrogate models of port scanning cyber emulations.
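The PCA half of that comparison can be sketched as follows on synthetic monotonically increasing, piecewise-constant series; the run count, step statistics, and component count are placeholders, and XPCA, which is not part of standard libraries, would replace the PCA step for discrete variates.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-ins for port-scan counts: monotonically increasing
# piecewise-constant time series whose step times and heights vary by run.
rng = np.random.default_rng(0)
n_runs, n_times = 200, 300
data = np.zeros((n_runs, n_times))
for i in range(n_runs):
    step_times = np.sort(rng.choice(n_times, size=5, replace=False))
    step_sizes = rng.integers(1, 10, size=5)
    for t, s in zip(step_times, step_sizes):
        data[i, t:] += s                     # cumulative, piecewise-constant

# Standard PCA baseline; XPCA would replace this step for discrete variates.
pca = PCA(n_components=5)
scores = pca.fit_transform(data)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
```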
Determining a process–structure–property relationship is the holy grail of materials science, where both computational prediction in the forward direction and materials design in the inverse direction are essential. Problems in materials design are often considered in the context of process–property linkage by bypassing the materials structure, or in the context of structure–property linkage as in microstructure-sensitive design problems. However, there is a lack of research effort in studying materials design problems in the context of process–structure linkage, which has great implications for reverse engineering. In this work, given a target microstructure, we propose an active learning high-throughput microstructure calibration framework to derive a set of processing parameters that can produce an optimal microstructure that is statistically equivalent to the target microstructure. The proposed framework is formulated as a noisy multi-objective optimization problem, where each objective function measures a deterministic or statistical difference of the same microstructure descriptor between a candidate microstructure and a target microstructure. Furthermore, to significantly reduce the physical wall-clock waiting time, we enable the high-throughput feature of the microstructure calibration framework by adopting asynchronously parallel Bayesian optimization that exploits high-performance computing resources. Case studies in additive manufacturing and grain growth are used to demonstrate the applicability of the proposed framework, where kinetic Monte Carlo (kMC) simulation is used as a forward predictive model, such that for a given target microstructure, the target processing parameters that produced this microstructure are successfully recovered.
In March and April of 2020 there was widespread concern about availability of medical resources required to treat Covid-19 patients who become seriously ill. A simulation model of supply management was developed to aid understanding of how to best manage available supplies and channel new production. Forecasted demands for critical therapeutic resources have tremendous uncertainty, largely due to uncertainties about the number and timing of patient arrivals. It is therefore essential to evaluate any process for managing supplies in view of this uncertainty. To support such evaluations, we developed a modeling framework that would allow an integrated assessment in the context of uncertainty quantification. At the time of writing there has been no need to execute this framework because adaptations of the medical system have been able to respond effectively to the outbreak. This report documents the framework and its implemented components should need later arise for its application.
As part of the Department of Energy response to the novel coronavirus pandemic of 2020, a modeling effort was sponsored by the DOE Office of Science. One task of this modeling effort at Sandia was to develop a model to predict medical resource needs given various patient arrival scenarios. Resources needed include personnel resources (nurses, ICU nurses, physicians, respiratory therapists), fixed resources (regular or ICU beds and ventilators), and consumable resources (masks, gowns, gloves, face shields, sedatives). This report documents the uncertainty analysis that was performed on the resource model. The uncertainty analysis involved sampling 26 input parameters to the model. The sampling was performed conditional on the patient arrival streams that also were inputs to the model. These patient arrival streams were derived from various epidemiology models and had a significant effect on the projected resource needs. In this report, we document the sampling approach, the parameter ranges used, and the computational workflow necessary to perform large-scale uncertainty studies for every county and state in the United States.
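A minimal sketch of the sampling step using Latin hypercube sampling via scipy (version 1.7 or later provides scipy.stats.qmc) is given below; the dimension of 26 follows the report, while the bounds and sample count are hypothetical placeholders rather than the values actually used.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample over 26 uncertain model inputs (bounds below are
# hypothetical placeholders, not the values used in the report).
n_params, n_samples = 26, 1000
lower = np.zeros(n_params)         # placeholder lower bounds
upper = np.ones(n_params)          # placeholder upper bounds

sampler = qmc.LatinHypercube(d=n_params, seed=0)
unit_sample = sampler.random(n=n_samples)
sample = qmc.scale(unit_sample, lower, upper)

# Each row is one parameter set to run through the resource model, conditional
# on a chosen patient-arrival stream.
print(sample.shape)
```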
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
The Dakota toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
The mechanical properties of additively manufactured metals tend to show high variability, due largely to the stochastic nature of defect formation during the printing process. This study seeks to understand how automated high throughput testing can be utilized to understand the variable nature of additively manufactured metals at different print conditions, and to allow for statistically meaningful analysis. This is demonstrated by analyzing how different processing parameters, including laser power, scan velocity, and scan pattern, influence the tensile behavior of additively manufactured stainless steel 316L utilizing a newly developed automated test methodology. Microstructural characterization through computed tomography and electron backscatter diffraction is used to understand some of the observed trends in mechanical behavior. Specifically, grain size and morphology are shown to depend on processing parameters and influence the observed mechanical behavior. In the current study, laser-powder bed fusion, also known as selective laser melting or direct metal laser sintering, is shown to produce 316L over a wide processing range without substantial detrimental effect on the tensile properties. Ultimate tensile strengths above 600 MPa, which are greater than that for typical wrought annealed 316L with similar grain sizes, and elongations to failure greater than 40% were observed. It is demonstrated that this process has little sensitivity to minor intentional or unintentional variations in laser velocity and power.
In this work, we develop Gaussian process regression (GPR) models of isotropic hyperelastic material behavior. First, we consider the direct approach of modeling the components of the Cauchy stress tensor as a function of the components of the Finger stretch tensor in a Gaussian process. We then consider an improvement on this approach that embeds rotational invariance of the stress-stretch constitutive relation in the GPR representation. This approach requires fewer training examples and achieves higher accuracy while maintaining invariance to rotations exactly. Finally, we consider an approach that recovers the strain-energy density function and derives the stress tensor from this potential. Although the error of this model for predicting the stress tensor is higher, the strain-energy density is recovered with high accuracy from limited training data. The approaches presented here are examples of physics-informed machine learning. They go beyond purely data-driven approaches by embedding the physical system constraints directly into the Gaussian process representation of materials models.
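A brief sketch of the invariant-based idea follows, assuming scikit-learn and a compressible neo-Hookean toy model to generate training data; the moduli, deformation range, and kernel are illustrative placeholders, not the paper's implementation. Training the GP on invariants of the Finger tensor makes the learned energy invariant to rotations by construction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Train a GP for the strain-energy density as a function of invariants of the
# Finger tensor B = F F^T. Training data come from a compressible neo-Hookean
# model (mu, lam below are illustrative moduli, not fitted constants).
rng = np.random.default_rng(0)
mu, lam = 1.0, 1.0

def random_def_gradients(n):
    return np.eye(3) + 0.1 * rng.normal(size=(n, 3, 3))   # moderate deformations

def invariants(F):
    B = F @ np.transpose(F, (0, 2, 1))
    I1 = np.trace(B, axis1=1, axis2=2)
    J = np.sqrt(np.linalg.det(B))
    return np.column_stack([I1, J])

def neo_hookean_energy(I1, J):
    return 0.5 * mu * (I1 - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

F_train, F_test = random_def_gradients(200), random_def_gradients(50)
X_train, X_test = invariants(F_train), invariants(F_test)
y_train = neo_hookean_energy(X_train[:, 0], X_train[:, 1])
y_test = neo_hookean_energy(X_test[:, 0], X_test[:, 1])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)
print("test R^2:", gp.score(X_test, y_test))
```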
Securing cyber systems is of paramount importance, but rigorous, evidence-based techniques to support decision makers for high-consequence decisions have been missing. The need for bringing rigor into cybersecurity is well-recognized, but little progress has been made over the last decades. We introduce a new project, SECURE, that aims to bring more rigor into cyber experimentation. The core idea is to follow the footsteps of computational science and engineering and expand similar capabilities to support rigorous cyber experimentation. In this paper, we review the cyber experimentation process, present the research areas that underlie our effort, discuss the underlying research challenges, and report on our progress to date. This paper is based on work in progress, and we expect to have more complete results for the conference.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (FCT) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling. These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) control account, which is charged with developing a geologic repository system modeling and analysis capability, and the associated software, GDSA Framework, for evaluating disposal system performance for nuclear waste in geologic media. GDSA Framework is supported by SFWST Campaign and its predecessor the Used Fuel Disposition (UFD) campaign.
The Spent Fuel and Waste Science and Technology (SFWST) Campaign of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Spent Fuel & Waste Disposition (SFWD) is conducting research and development (R&D) on geologic disposal of spent nuclear fuel (SNF) and high-level nuclear waste (HLW). Two high priorities for SFWST disposal R&D are design concept development and disposal system modeling (DOE 2011, Table 6). These priorities are directly addressed in the SFWST Geologic Disposal Safety Assessment (GDSA) work package, which is charged with developing a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media.
Tallman, Aaron E.; Stopka, Krzysztof S.; Swiler, Laura P.; Wang, Yan; Kalidindi, Surya R.; Mcdowell, David L.
Data-driven tools for finding structure–property (S–P) relations, such as the Materials Knowledge System (MKS) framework, can accelerate materials design once the costly and technical calibration process has been completed. A three-model method is proposed to reduce the expense of S–P relation model calibration: (1) direct simulations are performed according to (2) a Gaussian process-based data collection model, to calibrate (3) an MKS homogenization model in an application to α-Ti. The new method compares favorably with expert texture selection in terms of the performance of the calibrated MKS models. Benefits for the development of new and improved materials are discussed.
Probabilistic simulations of the post-closure performance of a generic deep geologic repository for commercial spent nuclear fuel in shale host rock provide a test case for comparing sensitivity analysis methods available in Geologic Disposal Safety Assessment (GDSA) Framework, the U.S. Department of Energy's state-of-the-art toolkit for repository performance assessment. Simulations assume a thick low-permeability shale with aquifers (potential paths to the biosphere) above and below the host rock. Multi-physics simulations on the 7-million-cell grid are run in a high-performance computing environment with PFLOTRAN. Epistemic uncertain inputs include properties of the engineered and natural systems. The output variables of interest, maximum I-129 concentrations (independent of time) at observation points in the aquifers, vary over several orders of magnitude. Variance-based global sensitivity analyses (i.e., calculations of sensitivity indices) conducted with Dakota use polynomial chaos expansion (PCE) and Gaussian process (GP) surrogate models. Results of analyses conducted with raw output concentrations and with log-transformed output concentrations are compared. Using log-transformed concentrations results in larger sensitivity indices for more influential input variables, smaller sensitivity indices for less influential input variables, and more consistent values for sensitivity indices between methods (PCE and GP) and between analyses repeated with samples of different sizes.
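The effect of the log transform on variance-based indices can be illustrated with a toy model. The sketch below uses SALib as a stand-in for the Dakota PCE/GP workflow described above; the variable names, bounds, and output function are placeholders spanning several orders of magnitude, and recent SALib versions prefer SALib.sample.sobol over the saltelli module, though the call pattern is similar.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy demonstration: variance-based sensitivity indices computed on raw and
# log-transformed outputs. Names, bounds, and the model are placeholders.
problem = {
    "num_vars": 3,
    "names": ["log_permeability", "porosity", "rate_constant"],
    "bounds": [[-1.0, 1.0], [0.05, 0.3], [0.1, 1.0]],
}

X = saltelli.sample(problem, 1024)
# Placeholder output spanning several orders of magnitude, like the peak
# concentrations described above.
Y = 10.0 ** (3.0 * X[:, 0]) * X[:, 1] + 0.01 * X[:, 2]

raw = sobol.analyze(problem, Y)
logged = sobol.analyze(problem, np.log10(Y))
print("first-order indices, raw output:", np.round(raw["S1"], 2))
print("first-order indices, log output:", np.round(logged["S1"], 2))
```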
Two surrogate models are under development to rapidly emulate the effects of the Fuel Matrix Degradation (FMD) model in GDSA Framework. One is a polynomial regression surrogate with linear and quadratic fits, and the other is a k-Nearest Neighbors regressor (kNNr) method that operates on a lookup table. Direct coupling of the FMD model to GDSA Framework is too computationally expensive. Preliminary results indicate these surrogate models will enable GDSA Framework to rapidly simulate spent fuel dissolution for each individual breached spent fuel waste package in a probabilistic repository simulation. This capability will allow uncertainties in spent fuel dissolution to be propagated and sensitivities in FMD inputs to be quantified and ranked against other inputs.
Communication networks have evolved to a level of sophistication that requires computer models and numerical simulations to understand and predict their behavior. A network simulator is software that enables the network designer to model components of a computer network, such as nodes, routers, switches, and links, and events such as data transmissions and packet errors, in order to obtain device- and network-level metrics. Network simulations, like many other numerical approximations of complex systems, are sensitive to the specification of parameters and operative conditions of the system. Very often a full characterization of the system and its inputs is not possible, so Uncertainty Quantification (UQ) strategies need to be deployed to evaluate the statistics of its response and behavior. UQ techniques, despite the advancements of the last two decades, still struggle in the presence of a large number of uncertain variables and when the regularity of the system's response cannot be guaranteed. In this context, multifidelity approaches have recently gained popularity in the UQ community due to their flexibility and robustness with respect to these challenges. The main idea behind these techniques is to extract information from a limited number of high-fidelity model realizations and complement them with a much larger set of lower-fidelity evaluations. The final result is an estimator with much lower variance, i.e., a more accurate and reliable estimate. In this contribution we investigate the possibility of deploying multifidelity UQ strategies for computer network analysis. Two numerical configurations are studied based on a simplified network with one client and one server. Preliminary results for these tests suggest that multifidelity sampling techniques can be effective tools for UQ in network applications.
This report summarizes the data analysis activities that were performed under the Born Qualified Grand Challenge Project from 2016 - 2018. It is meant to document the characterization of additively manufactured parts and processes for this project as well as demonstrate and identify further analyses and data science that could be done relating material processes to microstructure to properties to performance.
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section covariance matrices. The treatment allows one to assume the cross sections are distributed with a multivariate normal distribution, lognormal distribution, or truncated normal distribution.
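One common way to handle such ill-conditioned covariances is to factor them by eigen-decomposition with clipping of small or negative eigenvalues before drawing correlated samples; the sketch below illustrates this on a synthetic rank-deficient covariance, and the lognormal variant shown (reusing the same covariance in log space) is a simplification for illustration rather than the report's treatment.

```python
import numpy as np

# Draw correlated cross-section samples from an ill-conditioned covariance
# matrix by clipping small/negative eigenvalues before factorization; the
# covariance below is a synthetic stand-in for group-wise cross-section data.
rng = np.random.default_rng(0)
n_groups = 40
A = rng.normal(size=(n_groups, 5))
cov = A @ A.T * 1e-4 + 1e-12 * np.eye(n_groups)   # rank-deficient, ill-conditioned
mean = np.full(n_groups, 1.0)                     # placeholder group cross sections

# Eigen-decomposition with clipping (a Cholesky factorization would fail here).
w, V = np.linalg.eigh(cov)
w_clipped = np.clip(w, 0.0, None)
L = V @ np.diag(np.sqrt(w_clipped))

n_samples = 1000
z = rng.normal(size=(n_groups, n_samples))
normal_samples = mean[:, None] + L @ z            # multivariate normal samples

# Lognormal variant: treat (log mean, cov) as parameters of the underlying
# normal in log space, which keeps samples positive (a simplification).
lognormal_samples = np.exp(np.log(mean)[:, None] + L @ z)
print(normal_samples.shape, lognormal_samples.shape)
```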
This SAND report fulfills the final report requirement for the Born Qualified Grand Challenge LDRD. Born Qualified was funded from FY16-FY18 with a total budget of ~$13M over the 3 years of funding. Overall 70+ staff, Post Docs, and students supported this project over its lifetime. The driver for Born Qualified was using Additive Manufacturing (AM) to change the qualification paradigm for low volume, high value, high consequence, complex parts that are common in high-risk industries such as ND, defense, energy, aerospace, and medical. AM offers the opportunity to transform design, manufacturing, and qualification with its unique capabilities. AM is a disruptive technology, allowing the capability to simultaneously create part and material while tightly controlling and monitoring the manufacturing process at the voxel level, with the inherent flexibility and agility in printing layer-by-layer. AM enables the possibility of measuring critical material and part parameters during manufacturing, thus changing the way we collect data, assess performance, and accept or qualify parts. It provides an opportunity to shift from the current iterative design-build-test qualification paradigm using traditional manufacturing processes to design-by-predictivity where requirements are addressed concurrently and rapidly. The new qualification paradigm driven by AM provides the opportunity to predict performance probabilistically, to optimally control the manufacturing process, and to implement accelerated cycles of learning. Exploiting these capabilities to realize a new uncertainty quantification-driven qualification that is rapid, flexible, and practical is the focus of this effort.
Computational modeling and simulation are paramount to modern science. Computational models often replace physical experiments that are prohibitively expensive, dangerous, or occur at extreme scales. Thus, it is critical that these models accurately represent and can be used as replacements for reality. This paper provides an analysis of metrics that may be used to determine the validity of a computational model. While some metrics have a direct physical meaning and a long history of use, others, especially those that compare probabilistic data, are more difficult to interpret. Furthermore, the process of model validation is often application-specific, making the procedure itself challenging and the results difficult to defend. We therefore provide guidance and recommendations as to which validation metric to use, as well as how to use and decipher the results. An example is included that compares interpretations of various metrics and demonstrates the impact of model and experimental uncertainty on validation processes.
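Two of the more common metrics for comparing probabilistic model output against experimental data can be computed with scipy, as sketched below on synthetic placeholder data; the Wasserstein-1 distance between samples equals the area between their empirical CDFs, the quantity often called the area validation metric.

```python
import numpy as np
from scipy import stats

# Compare a model-predicted distribution against experimental measurements
# with two common probabilistic validation metrics; both data sets here are
# synthetic placeholders.
rng = np.random.default_rng(0)
model_samples = rng.normal(loc=10.0, scale=1.0, size=500)        # model predictions
experiment = rng.normal(loc=10.4, scale=1.3, size=40)            # measured values

ks_stat, ks_pvalue = stats.ks_2samp(model_samples, experiment)   # CDF discrepancy
area_metric = stats.wasserstein_distance(model_samples, experiment)  # area between CDFs

print(f"KS statistic {ks_stat:.3f} (p={ks_pvalue:.3f}), area metric {area_metric:.3f}")
```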
The classical problem of calculating the volume of the union of d-dimensional balls is known as "Union Volume." We present line-sampling approximation algorithms for Union Volume. Our methods may be extended to other Boolean operations, such as set difference, or to other shapes, such as hyper-rectangles. The deterministic, exact approaches for Union Volume do not scale well to high dimensions; however, we adapt several of these exact approaches into sampling-based approximation algorithms. We perform local sampling within each ball using lines. We present several variations, depending on how the overlapping volume is partitioned and on whether radial, axis-aligned, or other line patterns are used. Our variations fall within the family of Monte Carlo sampling and hence share roughly the same theoretical convergence rate, $1/\sqrt{M}$, where M is the number of samples. In our limited experiments, line sampling proved more accurate per unit of work than point sampling, because a line sample provides more information and the analytic equation for a sphere makes the calculation almost as fast. We performed a limited empirical study of the efficiency of these variations and suggest a more extensive study for future work. We speculate that different ball arrangements, differentiated by the distribution of overlaps in terms of volume and degree, will benefit the most from line-sample patterns that preferentially capture those overlaps.
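The line-sampling variants themselves are not reproduced here; the sketch below shows the point-sampling Monte Carlo baseline they improve upon, in which a point is drawn uniformly inside a volume-weighted random ball and weighted by the inverse of its coverage count. The function and the example geometry are illustrative assumptions, not the report's algorithms.

```python
import numpy as np
from scipy.special import gammaln

def union_volume_point_mc(centers, radii, m=100_000, rng=None):
    """Monte Carlo estimate of the volume of a union of d-dimensional balls.

    Point-sampling baseline: pick a ball with probability proportional to its
    volume, draw a point uniformly inside it, and weight by 1/(number of balls
    that contain the point).  Converges at the usual 1/sqrt(M) rate.
    """
    rng = np.random.default_rng(rng)
    centers = np.asarray(centers, dtype=float)
    radii = np.asarray(radii, dtype=float)
    n, d = centers.shape

    # Volume of a d-ball of radius r: pi^(d/2) / Gamma(d/2 + 1) * r^d
    log_unit = 0.5 * d * np.log(np.pi) - gammaln(0.5 * d + 1.0)
    vols = np.exp(log_unit + d * np.log(radii))
    total = vols.sum()

    # Which ball each sample is drawn from, weighted by ball volume.
    idx = rng.choice(n, size=m, p=vols / total)

    # Uniform point in a d-ball: Gaussian direction, radius scaled by U^(1/d).
    dirs = rng.standard_normal((m, d))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    r = radii[idx] * rng.uniform(size=m) ** (1.0 / d)
    pts = centers[idx] + dirs * r[:, None]

    # Coverage count: how many balls contain each sampled point.
    dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    counts = (dists <= radii[None, :]).sum(axis=1)

    return total * float(np.mean(1.0 / counts))

# Example: two overlapping unit disks in 2-D.
print(union_volume_point_mc([[0.0, 0.0], [1.0, 0.0]], [1.0, 1.0], m=200_000))
```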
In this study, we focus on a hydrogeological inverse problem, specifically the monitoring of soil moisture variations using tomographic ground penetrating radar (GPR) travel time data. Technical challenges exist in the inversion of GPR tomographic data in handling the non-uniqueness, nonlinearity, and high dimensionality of the unknowns. We have developed a new method for estimating soil moisture fields from crosshole GPR data. It uses a pilot-point method to provide a low-dimensional representation of the relative dielectric permittivity field of the soil, which is the primary object of inference; the field can be converted to soil moisture using a petrophysical model. We integrate a multi-chain Markov chain Monte Carlo (MCMC) Bayesian inversion framework with the pilot-point concept, a curved-ray GPR travel time model, and a sequential Gaussian simulation algorithm to estimate the dielectric permittivity at pilot-point locations distributed within the tomogram, as well as the corresponding geostatistical parameters (i.e., the spatial correlation range). We infer the dielectric permittivity as a probability density function, thus capturing the uncertainty in the inference. The multi-chain MCMC enables the high-dimensional inverse problems required by this inversion setup to be addressed. The method is scalable in terms of the number of chains and processors, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. The proposed inversion approach successfully approximates the posterior density distributions of the pilot points and captures the true values. The computational efficiency, accuracy, and convergence behavior of the inversion approach were also systematically evaluated by comparing inversion results obtained with different levels of noise in the observations, increased amounts of observational data, and increased numbers of pilot points.
Blue noise sampling has proved useful for many graphics applications, but remains underexplored in high-dimensional spaces due to the difficulty of generating distributions and proving properties about them. We present a blue noise sampling method with good quality and performance across different dimensions. The method, spoke-dart sampling, shoots rays from prior samples and selects samples from these rays. It combines the advantages of two major high-dimensional sampling methods: the locality of advancing front with the dimensionality-reduction of hyperplanes, specifically line sampling. We prove that the output sampling is saturated with high probability, with bounds on distances between pairs of samples and between any domain point and its nearest sample. We demonstrate spoke-dart applications for approximate Delaunay graph construction, global optimization, and robotic motion planning. Both the blue-noise quality of the output distribution and the adaptability of the intermediate processes of our method are useful in these applications.
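A much-simplified, rejection-based rendition of the spoke-dart idea is sketched below: a random ray (spoke) is shot from an existing sample, a candidate point is proposed on that ray, and it is kept only if it is far enough from all prior samples. The published algorithm trims spokes against prior samples rather than rejecting candidates, so the parameters, domain, and stopping rule here are illustrative assumptions only.

```python
import numpy as np

def spoke_dart_sketch(d, r, n_target=500, max_misses=200, rng=None):
    """Simplified spoke-dart-style blue-noise sampling in the unit d-cube.

    From a randomly chosen prior sample, shoot a random direction, propose a
    point at distance between r and 2r along it, and accept it if it is at
    least r away from every existing sample.  Rejection-based sketch of the
    advancing-front line-sampling idea, not the trimming algorithm itself.
    """
    rng = np.random.default_rng(rng)
    samples = [rng.uniform(size=d)]          # seed sample
    misses = 0
    while len(samples) < n_target and misses < max_misses:
        anchor = samples[rng.integers(len(samples))]
        direction = rng.standard_normal(d)
        direction /= np.linalg.norm(direction)
        candidate = anchor + direction * rng.uniform(r, 2.0 * r)
        inside = np.all((candidate >= 0.0) & (candidate <= 1.0))
        pts = np.asarray(samples)
        far_enough = np.all(np.linalg.norm(pts - candidate, axis=1) >= r)
        if inside and far_enough:
            samples.append(candidate)
            misses = 0
        else:
            misses += 1
    return np.asarray(samples)

# Example: Poisson-disk-like samples in 6 dimensions with spacing 0.35.
pts = spoke_dart_sketch(d=6, r=0.35)
print(pts.shape)
```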
The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions of physical phenomena. Because model predictions are now heavily relied upon for simulation-based system design, including new materials, vehicles, mechanical and civil structures, and even new drugs, erroneous model predictions could have catastrophic consequences. Therefore, uncertainty and the associated risks due to model errors should be quantified to support robust systems engineering.
This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including the comparison of constitutive models. The ideas described herein are applied to a model selection problem among different yield models for hardened steel under extreme loading conditions.
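For illustration, the sketch below compares two hypothetical hardening models by estimating the marginal likelihood (evidence) of each with naive Monte Carlo integration over the prior and forming a Bayes factor. The data, models, priors, and brute-force evidence estimator are placeholders for exposition and are not the yield models or methods used in the report.

```python
import numpy as np

def log_evidence_mc(log_likelihood, prior_sampler, n_draws=20_000, rng=None):
    """Naive Monte Carlo estimate of the log marginal likelihood (evidence).

    Evidence = E_prior[ likelihood(theta) ]; adequate only for low-dimensional
    illustrative problems, not for serious model-selection work.
    """
    rng = np.random.default_rng(rng)
    thetas = prior_sampler(rng, n_draws)
    log_like = np.array([log_likelihood(t) for t in thetas])
    m = log_like.max()                       # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_like - m)))

# Hypothetical flow-stress data (strain, stress in MPa) with known noise level.
strain = np.linspace(0.0, 0.2, 25)
stress_obs = 300.0 + 900.0 * strain**0.5 + np.random.default_rng(1).normal(0, 15, strain.size)
sigma_noise = 15.0

def log_like_factory(model):
    def log_like(theta):
        resid = stress_obs - model(strain, theta)
        # Constant normalization term omitted; it cancels in the Bayes factor
        # because both models share the same Gaussian noise model and data.
        return -0.5 * np.sum((resid / sigma_noise) ** 2)
    return log_like

# Model A: linear hardening; Model B: square-root (power-law) hardening.
model_a = lambda e, th: th[0] + th[1] * e
model_b = lambda e, th: th[0] + th[1] * np.sqrt(e)
prior_a = lambda rng, n: rng.uniform([200, 0], [400, 3000], size=(n, 2))
prior_b = lambda rng, n: rng.uniform([200, 0], [400, 3000], size=(n, 2))

log_z_a = log_evidence_mc(log_like_factory(model_a), prior_a)
log_z_b = log_evidence_mc(log_like_factory(model_b), prior_b)
print(f"log Bayes factor (B vs. A): {log_z_b - log_z_a:.1f}")
```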
In this study we developed an efficient Bayesian inversion framework for interpreting marine seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data for marine reservoir characterization. The framework uses a multi-chain Markov chain Monte Carlo sampler, which is a hybrid of the DiffeRential Evolution Adaptive Metropolis and Adaptive Metropolis samplers. The inversion framework is tested by estimating reservoir-fluid saturations and porosity based on marine seismic and Controlled-Source Electromagnetic data. The multi-chain Markov chain Monte Carlo is scalable in terms of the number of chains, and is useful for computationally demanding Bayesian model calibration in scientific and engineering problems. As a demonstration, the approach is used to efficiently and accurately estimate the porosity and saturations in a representative layered synthetic reservoir. The results indicate that the joint inversion of seismic Amplitude Versus Angle and Controlled-Source Electromagnetic data provides better estimation of reservoir saturations than the seismic Amplitude Versus Angle inversion alone, especially for the parameters in deep layers. The performance of the inversion approach for various levels of noise in the observational data was evaluated; reasonable estimates can be obtained with noise levels up to 25%. The sampling efficiency gained from using multiple chains was also checked and was found to scale almost linearly.
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned, which poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section covariance matrices.
Additive manufacturing enables the rapid, cost-effective production of customized structural components. To fully capitalize on the agility of additive manufacturing, it is necessary to develop complementary high-throughput materials evaluation techniques. In this study, over 1000 nominally identical tensile tests are used to explore the effect of process variability on the mechanical property distributions of a precipitation-hardened stainless steel produced by a laser powder bed fusion process, also known as direct metal laser sintering or selective laser melting. This large dataset reveals rare defects that affect only ≈2% of the population and stem from a single build lot of material. The rare defects cause a substantial loss in ductility and are associated with an interconnected network of porosity. The adoption of streamlined test methods will be paramount to diagnosing and mitigating such dangerous anomalies in future structural components.
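A back-of-the-envelope calculation, not taken from the study, shows why defects affecting roughly 2% of a population demand high-throughput testing: small conventional test campaigns are quite likely to miss them entirely.

```python
import numpy as np

# With a defect prevalence of about 2%, how many nominally identical tests are
# needed to observe at least one defective specimen with a given confidence?
p = 0.02
for conf in (0.90, 0.95, 0.99):
    n = int(np.ceil(np.log(1.0 - conf) / np.log(1.0 - p)))
    print(f"{conf:.0%} chance of seeing >= 1 defect requires ~{n} tests")

# Conversely, the chance that a 30-specimen campaign misses the defect entirely:
print(f"P(miss with 30 tests) = {(1 - p) ** 30:.2f}")
```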
We present the development of a parallel Markov chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains, and the chain ensemble can be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution phase and efficiently homes in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space and (2) ensure robustness to the silent errors that may be unavoidable in future extreme-scale computational platforms. This report outlines SAChES, describes four papers that resulted from the project, and discusses some additional results.
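SAChES itself is not reproduced here, but the differential-evolution proposal that underpins its first phase can be sketched as follows: each chain proposes a jump along the difference of two other randomly chosen chains and applies a Metropolis accept/reject test. The scaling factor and jitter below are common textbook choices and are assumptions, not necessarily the values used in SAChES.

```python
import numpy as np

def de_mc_step(chains, log_post, gamma=None, jitter=1e-6, rng=None):
    """One differential-evolution Monte Carlo update of an ensemble of chains.

    Each chain proposes a move along the difference of two other randomly
    chosen chains, then accepts or rejects with a Metropolis test.  With many
    loosely coupled chains, the ensemble itself adapts the proposal scale.
    """
    rng = np.random.default_rng(rng)
    n_chains, dim = chains.shape
    gamma = gamma if gamma is not None else 2.38 / np.sqrt(2.0 * dim)
    new = chains.copy()
    logp = np.array([log_post(c) for c in chains])
    for i in range(n_chains):
        r1, r2 = rng.choice([j for j in range(n_chains) if j != i],
                            size=2, replace=False)
        proposal = chains[i] + gamma * (chains[r1] - chains[r2]) \
                   + jitter * rng.standard_normal(dim)
        logp_prop = log_post(proposal)
        if np.log(rng.uniform()) < logp_prop - logp[i]:
            new[i], logp[i] = proposal, logp_prop
    return new

# Example: 40 chains sampling a correlated 5-D Gaussian posterior.
dim, n_chains = 5, 40
cov = 0.5 * np.eye(dim) + 0.5
prec = np.linalg.inv(cov)
log_post = lambda x: -0.5 * x @ prec @ x
chains = np.random.default_rng(3).standard_normal((n_chains, dim))
for _ in range(500):
    chains = de_mc_step(chains, log_post)
print(chains.mean(axis=0))   # should be near zero
```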
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on integrating Dakota into the NEAMS Workbench. The NEAMS Workbench, developed at Oak Ridge National Laboratory, is a new software framework that provides a graphical user interface, input file creation, parsing, validation, job execution, workflow management, and output processing for a variety of nuclear codes. Dakota is a tool developed at Sandia National Laboratories that provides a suite of uncertainty quantification and optimization algorithms. Providing Dakota within the NEAMS Workbench allows users of nuclear simulation codes to perform uncertainty and optimization studies on their nuclear codes from within a common, integrated environment. Details of the integration and parsing are provided, along with an example of Dakota running a sampling study on the fuels performance code, BISON, from within the NEAMS Workbench.
An adage within the Additive Manufacturing (AM) community is that "complexity is free": complicated geometric features that normally drive manufacturing cost and limit design options are not typically problematic in AM. While geometric complexity is usually viewed from the perspective of part design, this advantage of AM also opens up new options for rapid, efficient material property evaluation and qualification. In the current work, an array of 100 miniature tensile bars is produced and tested at a cost and in a time comparable to those of a few conventional tensile bars. With this technique, it is possible to evaluate the stochastic nature of mechanical behavior. The current study focuses on stochastic yield strength, ultimate strength, and ductility as measured by strain at failure (elongation); however, the method can be used to capture the statistical nature of many mechanical properties, including the full stress-strain constitutive response, elastic modulus, work hardening, and fracture toughness. Moreover, the technique could extend to strain-rate- and temperature-dependent behavior. As a proof of concept, the technique is demonstrated on a precipitation-hardened stainless steel alloy, commonly known as 17-4PH, produced by two commercial AM vendors using a laser powder bed fusion process, also commonly known as selective laser melting. Using two different commercial powder bed platforms, the vendors produced material that exhibited slightly lower strength and markedly lower ductility compared to wrought sheet. Moreover, the properties were much less repeatable in the AM materials, as analyzed in the context of a Weibull distribution, and did not consistently meet the minimum allowable requirements for the alloy as established by AMS. The diminished, stochastic properties were examined in the context of major contributing factors such as surface roughness and internal lack-of-fusion porosity. This high-throughput capability is expected to be useful for follow-on extensive parametric studies of factors that affect the statistical reliability of AM components.
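The Weibull treatment of property repeatability can be illustrated with the short sketch below, which fits two-parameter Weibull distributions to strength data and reports the Weibull modulus and a low-percentile strength. The data are synthetic placeholders standing in for the measured miniature-tensile results, and the vendor labels are hypothetical.

```python
import numpy as np
from scipy import stats

# Synthetic placeholder yield-strength data (MPa) for two hypothetical AM builds.
rng = np.random.default_rng(7)
vendor_a = stats.weibull_min.rvs(c=20, scale=1050, size=100, random_state=rng)
vendor_b = stats.weibull_min.rvs(c=9, scale=980, size=100, random_state=rng)

for name, data in [("vendor A", vendor_a), ("vendor B", vendor_b)]:
    # Two-parameter Weibull fit (location fixed at zero).
    shape, _, scale = stats.weibull_min.fit(data, floc=0.0)
    # Lower Weibull modulus => wider scatter => less repeatable properties.
    b01 = scale * (-np.log(1 - 0.01)) ** (1.0 / shape)   # 1st-percentile strength
    print(f"{name}: Weibull modulus m = {shape:.1f}, "
          f"characteristic strength = {scale:.0f} MPa, B1 ~ {b01:.0f} MPa")
```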