Current methods for stochastic media transport are either computationally expensive or, by nature, approximate. Moreover, none of the well-developed, benchmarked approximate methods can compute the variance caused by the stochastic mixing, a quantity especially important to safety calculations. Therefore, we derive and apply a new conditional probability function (CPF) for use in the recently developed stochastic media transport algorithm Conditional Point Sampling (CoPS), which 1) leverages the full intra-particle memory of CoPS to yield errorless computation of stochastic media outputs in 1D, binary, Markovian-mixed media, and 2) leverages the full inter-particle memory of CoPS and the recently developed Embedded Variance Deconvolution method to compute the variance in transport outputs caused by stochastic material mixing. Numerical results with the new CPF demonstrate errorless stochastic media transport, as compared to reference benchmark solutions, for this class of stochastic mixing, as well as the ability to compute the variance caused by the stochastic mixing via CoPS. Using previously derived, non-errorless CPFs, CoPS is further found to be more accurate than the atomic mix approximation, Chord Length Sampling (CLS), and most of the memory-enhanced versions of CLS surveyed. In addition, we study the compounding behavior of CPF error as a function of cohort size (where a cohort is a group of histories that share intra-particle memory) and recommend that small cohorts be used when computing the variance in transport outputs caused by stochastic mixing.
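For readers unfamiliar with the conditional sampling idea, the sketch below shows the simplest possible CPF for 1D binary Markovian mixing: the material at a new point is sampled conditioned on only the single nearest previously known point, using the standard two-state Markov-chain result. This is an illustration only; the CPF derived in this work conditions on the full intra-particle memory rather than a single neighbor, and the function and argument names are ours.

```python
import numpy as np

def sample_material_single_neighbor(distance, known_material, mean_chord, rng):
    """Sample the material at a new point given the material at the nearest
    previously sampled point, for 1D binary Markovian mixing.
    mean_chord = (l0, l1) holds each material's mean chord length."""
    l0, l1 = mean_chord
    vol_frac = np.array([l0, l1]) / (l0 + l1)      # stationary volume fractions
    lam_c = l0 * l1 / (l0 + l1)                    # correlation length
    p_same = vol_frac[known_material] + (1.0 - vol_frac[known_material]) * np.exp(-distance / lam_c)
    return known_material if rng.random() < p_same else 1 - known_material
```

For example, with mean chord lengths (1.0, 2.0) and a known point of material 0 a distance 0.5 away, the new point is sampled as material 0 with probability 1/3 + (2/3)exp(-0.75).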
Thermal spray processes involve the repeated impact of millions of discrete particles, whose melting, deformation, and coating-formation dynamics occur at microsecond timescales. The accumulated coating that evolves over minutes comprises complex, multiphase microstructures, and the timescale difference between individual particle solidification and overall coating formation represents a significant challenge for analysts attempting to simulate microstructure evolution. To overcome this computational burden, researchers have created rule-based models (similar to cellular automata methods) that do not directly simulate the physics of the process. Instead, the simulation is governed by a set of predefined rules, which do not capture the fine details of the evolution but do provide a useful approximation for the simulation of coating microstructures. Here, we introduce a new rule-based process model for microstructure formation during thermal spray processes. The model is 3D, allows for an arbitrary number of material types, and includes multiple porosity-generation mechanisms. Example results of the model for tantalum coatings are presented along with sensitivity analyses of model parameters and validation against 3D experimental data. The model's computational efficiency allows for investigations into the stochastic variation of coating microstructures, in addition to the typical process-to-structure relationships.
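As a loose illustration of what "rule-based" means here, the toy sketch below deposits disc-shaped splats onto a voxelized height map and occasionally leaves cells unfilled as pores. It is not the model introduced in this work (which is 3D, multi-material, and has multiple porosity mechanisms); all names, rules, and parameters below are invented for illustration.

```python
import numpy as np

def deposit_coating(nx=64, ny=64, n_splats=2000, splat_radius=3, pore_prob=0.05, seed=0):
    """Toy rule-based deposition: each splat raises the local surface over a
    disc-shaped footprint; a small fraction of cells is marked as porosity."""
    rng = np.random.default_rng(seed)
    height = np.zeros((nx, ny), dtype=int)                     # running surface height
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    voxels = {}                                                # (i, j, k) -> 0 (metal) or 1 (pore)
    for _ in range(n_splats):
        cx, cy = rng.integers(nx), rng.integers(ny)            # random impact location
        footprint = (xs - cx) ** 2 + (ys - cy) ** 2 <= splat_radius ** 2
        base = int(height[footprint].max())                    # splat rests on the tallest cell it covers
        for i, j in zip(*np.nonzero(footprint)):
            voxels[(int(i), int(j), base)] = 1 if rng.random() < pore_prob else 0
            height[i, j] = base + 1
    return voxels, height
```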
Conditional Point Sampling (CoPS) is a newly developed Monte Carlo method for computing radiation transport quantities in stochastic media. The algorithm maintains a growing list of point-wise material designations during simulation, which can cause potentially unbounded increases in memory and runtime, making the calculation of probability density functions (PDFs) computationally expensive. In this work, we adapt CoPS by omitting material points used in the computation from persistent memory if they are within a user-defined “amnesia radius” of neighboring material points already defined within a realization. We conduct numerical studies to investigate trade-offs between accuracy, required computer memory, and computation time. We demonstrate CoPS's ability to produce accurate mean leakage results and PDFs of leakage results while improving memory and runtime through use of an amnesia radius. We show that a non-zero amnesia radius imposes a limit on the required computer memory per cohort of histories and on the average runtime per history. We find that, for the benchmark set investigated, using an amnesia radius of r_a = 0.01 introduces minimal error (a 0.006 increase in CoPS3PO root mean squared relative error) while improving memory and runtime by an order of magnitude for a cohort size of 100.
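A minimal sketch of the amnesia-radius rule follows, assuming points are stored as coordinate tuples in a per-cohort list; the names and data layout are illustrative rather than those of the actual CoPS implementation.

```python
import numpy as np

def maybe_store_point(new_point, new_material, stored_points, stored_materials, amnesia_radius):
    """Keep a newly sampled material point in the realization's persistent memory
    only if it lies farther than the amnesia radius from every stored point;
    otherwise the point is used for the current flight but then forgotten."""
    if stored_points:
        dists = np.linalg.norm(np.asarray(stored_points) - np.asarray(new_point), axis=1)
        if dists.min() <= amnesia_radius:
            return False                      # within r_a of an existing point: do not remember it
    stored_points.append(tuple(new_point))
    stored_materials.append(new_material)
    return True
```

With amnesia_radius = 0 every distinct point is stored and the original CoPS behavior is recovered; a non-zero radius prevents stored points from clustering arbitrarily close together, which is consistent with the memory limit described above.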
Sobol' sensitivity indices (SI) provide robust and accurate measures of how much uncertainty in output quantities is caused by different uncertain input parameters. These allow analysts to prioritize future work to either reduce or better quantify the effects of the most important uncertain parameters. One of the most common approaches to computing SI requires Monte Carlo (MC) sampling of uncertain parameters and full physics code runs to compute the response for each of these samples. When the physics code is a MC radiation transport code, this traditional approach to computing SI presents a workflow in which the MC transport calculation must be sufficiently resolved for each MC uncertain parameter sample. This process can be prohibitively expensive, especially since thousands or more particle histories are often required for each of thousands of uncertain parameter samples. We propose a process for computing SI in which only a few MC radiation transport histories are simulated before sampling new uncertain parameter values. We use Embedded Variance Deconvolution (EVADE) to parse the desired parametric variance from the MC transport variance on each uncertain parameter sample. To provide a relevant benchmark, we propose a new radiation transport benchmark problem and derive analytic solutions for its outputs, including SI. The new EVADE-based approach is found to converge with MC convergence behavior and to be at least an order of magnitude more precise than the traditional approach, at the same computational cost, for several SI on our test problem.
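The central deconvolution step can be summarized in a few lines. The hedged sketch below shows only the basic idea of removing the average per-sample MC noise from the observed spread of per-sample means; the EVADE-based SI estimators in this work build on this idea, and the function name and inputs here are assumptions for illustration.

```python
import numpy as np

def deconvolved_parametric_variance(sample_means, sample_mc_variances):
    """Estimate the variance due to the uncertain parameters alone.
    sample_means[i]        : MC estimate of the QoI at uncertain-parameter sample i
    sample_mc_variances[i] : estimated MC variance of that sample mean
    The observed spread of the sample means contains both parametric variance and
    MC noise, so the average MC variance is subtracted out."""
    total_var = np.var(sample_means, ddof=1)
    return total_var - np.mean(sample_mc_variances)
```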
Vu, Emily H.; Brantley, Patrick S.; Olson, Aaron J.; Kiedrowski, Brian C. Proceedings of the International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M&C 2021).
We extend the Monte Carlo Chord Length Sampling (CLS) and Local Realization Preserving (LRP) algorithms to the N-ary stochastic medium case using two recently developed models, a uniform model and a volume-fraction model, that follow a Markov-chain process for N-ary problems in one-dimensional, Markovian-mixed media. We use the Lawrence Livermore National Laboratory Mercury Monte Carlo particle transport code to compute CLS and LRP reflection and transmission leakage values and material scalar flux distributions for one-dimensional, Markovian-mixed quaternary stochastic media based on the two N-ary stochastic medium models. We conduct accuracy comparisons against benchmark results produced with the Sandia National Laboratories PlaybookMC stochastic media transport research code. We show that CLS and LRP produce exact results for purely absorbing N-ary stochastic medium problems and find that LRP is generally more accurate than CLS for problems with scattering.
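For context, a single CLS step in an N-ary medium can be sketched as below: the distance to the next material interface is sampled from an exponential with the current material's mean chord length, and the material on the far side is drawn from a row of an assumed transition-probability matrix. This is a generic illustration, not the uniform or volume-fraction model formulation used in this work.

```python
import numpy as np

def cls_step(material, mean_chord, transition_probs, rng):
    """One on-the-fly Chord Length Sampling step for an N-ary Markovian medium."""
    distance = rng.exponential(mean_chord[material])                    # distance to next interface
    new_material = rng.choice(len(mean_chord), p=transition_probs[material])
    return distance, new_material
```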
Work on radiation transport in stochastic media has tended to focus on binary mixing with Markovian mixing statistics. However, although some real-world applications involve only two materials, others involve three or more. Therefore, we seek to provide a foundation for ongoing theoretical and numerical work with “N-ary” stochastic media composed of discrete material phases with spatially homogeneous Markovian mixing statistics. To accomplish this goal, we first describe a set of parameters and relationships that are useful for characterizing such media. In doing so, we make a noteworthy observation: media that are frequently called Poisson media comprise only a subset of those that have Markovian mixing statistics. Since the concept of correlation length (as it has been used in the stochastic media transport literature) and the hyperplane realization generation method are both tied to the Poisson property of the media, we argue that not all media with Markovian mixing statistics have a correlation length in this sense or are realizable with the traditional hyperplane generation method. Second, we describe methods for generating realizations of N-ary media with Markovian mixing. We generalize the chord- and hyperplane-based sampling methods from binary to N-ary mixing and propose a novel recursive hyperplane method that can generate a broader class of material structures than the traditional, non-recursive hyperplane method. Finally, we perform numerical studies that validate the proposed N-ary relationships and generation methods: statistical quantities observed from realizations of ternary and quaternary media are shown to agree with predicted values.
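The chord-based generation of a single 1D N-ary realization can be sketched as follows: alternate exponentially distributed chord lengths with embedded-Markov-chain material transitions. The function below is an illustration with assumed inputs (starting-material probabilities and a transition matrix); for simplicity the first chord is sampled like all the others, whereas a strictly stationary construction would length-bias it.

```python
import numpy as np

def generate_realization(slab_length, mean_chord, transition_probs, start_probs, rng):
    """Return interface positions and the material index of each segment for one
    1D N-ary realization generated by chord-based sampling."""
    interfaces = [0.0]
    materials = [rng.choice(len(mean_chord), p=start_probs)]
    while interfaces[-1] < slab_length:
        chord = rng.exponential(mean_chord[materials[-1]])
        interfaces.append(min(interfaces[-1] + chord, slab_length))
        materials.append(rng.choice(len(mean_chord), p=transition_probs[materials[-1]]))
    return interfaces, materials[:-1]
```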
Conditional Point Sampling (CoPS) is a recently developed stochastic media transport algorithm that has demonstrated a high degree of accuracy in 1D and 3D simulations implemented for the CPU in Python. However, it is increasingly important that modern, production-level transport codes and algorithms like CoPS be adapted for use on next-generation computing architectures. In this project, we describe the creation of a fast and accurate variant of CoPS implemented for the GPU in C++. As an initial test, we performed a code-to-code verification using single-history cohorts, which showed that the GPU implementation matched the original CPU implementation to within statistical uncertainty while improving the speed by over a factor of 4000. We then tested the GPU implementation for cohorts up to size 64 and compared three variants of CoPS based on how the particle histories are grouped into cohorts: successive, simultaneous, and a successive-simultaneous hybrid. We examined the accuracy-efficiency tradeoff of each variant for 9 different benchmarks, measuring the reflectance and transmittance in a cubic geometry with reflecting boundary conditions on the four faces where neither reflectance nor transmittance is tallied. Successive cohorts were found to be far more accurate than simultaneous cohorts for both reflectance (4.3 times) and transmittance (5.9 times), although simultaneous cohorts run more than twice as fast as successive cohorts, especially for larger cohorts. The hybrid cohorts demonstrated speed and accuracy behavior most similar to that of simultaneous cohorts. Overall, successive cohorts were found to be more suitable for the GPU due to their greater accuracy and reproducibility, although simultaneous and hybrid cohorts present an enticing prospect for future research.
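To make the cohort terminology concrete, the sketch below shows the successive variant in its simplest form: histories in a cohort are run one after another, and each later history conditions on (and adds to) the material points stored by the earlier ones. This is only a schematic reading of the variants described above; transport_one_history is a placeholder for the CoPS history loop.

```python
def run_successive_cohort(histories, transport_one_history):
    """Run a cohort's histories sequentially with a shared, growing point memory."""
    shared_points = []        # material points accumulated across the cohort
    results = []
    for history in histories:
        results.append(transport_one_history(history, shared_points))
    return results
```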
Conditional Point Sampling (CoPS) is a recently developed stochastic media transport algorithm that has demonstrated a high degree of accuracy in 1-D and 3-D calculations for binary mixtures with Markovian mixing statistics. In theory, CoPS has the capacity to be accurate for material structures beyond just those with Markovian statistics. However, realizing this capability will require development of conditional probability functions (CPFs) that are based, not on explicit Markovian properties, but rather on latent properties extracted from material structures. Here, we describe a first step towards extracting these properties by developing CPFs using deep neural networks (DNNs). Our new approach lays the groundwork for enabling accurate transport on many classes of stochastic media. We train DNNs on ternary stochastic media with Markovian mixing statistics and compare their CPF predictions to those made by existing CoPS CPFs, which are derived based on Markovian mixing properties. We find that the DNN CPF predictions usually outperform the existing approximate CPF predictions, but with wider variance. In addition, even when trained on only one material volume realization, the DNN CPFs are shown to make accurate predictions on other realizations that have the same internal mixing behavior. We show that it is possible to form a useful CoPS CPF by using a DNN to extract correlation properties from realizations of stochastically mixed media, thus establishing a foundation for creating CPFs for mixtures other than those with Markovian mixing, where it may not be possible to derive an accurate analytical CPF.
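A hedged sketch of what a DNN-based CPF might look like is given below: a small PyTorch multilayer perceptron that maps features of the k nearest known points (distance to the query plus a one-hot material label for each) to a probability over the N candidate materials. The architecture, feature encoding, and sizes are assumptions for illustration, not the networks trained in this work.

```python
import torch
from torch import nn

N_MATERIALS = 3      # e.g., ternary mixing
K_NEIGHBORS = 4      # known points conditioning each prediction
N_FEATURES = K_NEIGHBORS * (1 + N_MATERIALS)   # per neighbor: distance + one-hot material

cpf_net = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, N_MATERIALS),                # logits over candidate materials
)

def predict_cpf(features):
    """Return P(material at the query point | neighboring known points)."""
    with torch.no_grad():
        logits = cpf_net(torch.as_tensor(features, dtype=torch.float32))
        return torch.softmax(logits, dim=-1)
```

Training such a network would amount to minimizing the cross-entropy between these predictions and the materials observed at query points drawn from realizations of the mixing process.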
The accurate construction of a surrogate model is an effective and efficient strategy for performing Uncertainty Quantification (UQ) analyses of expensive and complex engineering systems. Surrogate models are especially powerful whenever the UQ analysis requires the computation of statistics that are difficult and prohibitively expensive to obtain via direct sampling of the model, e.g., high-order moments and probability density functions. In this paper, we discuss the construction of a polynomial chaos expansion (PCE) surrogate model for radiation transport problems in which quantities of interest are obtained via Monte Carlo simulations. In this context, it is imperative to account for the statistical variability of the simulator as well as the variability associated with the uncertain parameter inputs. More formally, in this paper we focus on understanding the impact of the Monte Carlo transport variability on the recovery of the PCE coefficients. We identify the contributions of both the number of uncertain parameter samples and the number of particle histories simulated per sample to the PCE coefficient recovery. Our theoretical results indicate that, for an equivalent computational cost, using few Monte Carlo histories per random sample improves accuracy. These theoretical results are numerically illustrated for a simple synthetic example and two configurations of a one-dimensional radiation transport problem in which a slab is composed of materials with uncertain cross sections.
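The trade-off studied here can be reproduced in miniature with an ordinary least-squares PCE fit. The sketch below recovers 1D Legendre coefficients from noisy per-sample QoI estimates; the synthetic QoI, noise level, and sample counts are invented for illustration and only stand in for the transport problems considered.

```python
import numpy as np
from numpy.polynomial import legendre

def fit_pce(xi, qoi_estimates, degree):
    """Least-squares recovery of 1D Legendre PCE coefficients from noisy
    Monte Carlo estimates of the QoI at uncertain-parameter samples xi in [-1, 1]."""
    design = legendre.legvander(xi, degree)            # Legendre basis at the samples
    coeffs, *_ = np.linalg.lstsq(design, qoi_estimates, rcond=None)
    return coeffs

# Many cheap, noisy samples (few histories each) standing in for one equal-cost budget.
rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, 512)
truth = 0.5 + 0.3 * xi - 0.2 * (1.5 * xi**2 - 0.5)     # synthetic smooth QoI
noisy = truth + rng.normal(0.0, 0.1, xi.size)          # per-sample MC transport noise
print(fit_pce(xi, noisy, degree=2))                    # approximately [0.5, 0.3, -0.2]
```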
Thermal sprayed metal coatings are used in many industrial applications, and characterizing the structure and performance of these materials is vital to understanding their behavior in the field. X-ray Computed Tomography (CT) machines enable volumetric, nondestructive imaging of these materials, but precise segmentation of this grayscale image data into discrete material phases is necessary to calculate quantities of interest related to material structure. In this work, we present a methodology to automate the CT segmentation process and quantify uncertainty in segmentations via deep learning. Neural networks (NNs) are shown to accurately segment full-resolution CT scans of thermal sprayed materials and provide maps of uncertainty that conservatively bound the predicted geometry. These bounds are propagated through calculations of material properties, such as porosity, that may provide an understanding of anticipated behavior in the field.
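One simple way such uncertainty maps can be propagated to a scalar property is sketched below, assuming the network outputs a per-voxel pore probability: thresholding that map at confident and permissive levels yields conservative lower and upper bounds on porosity. The thresholds and function name are illustrative choices, not the exact procedure used in this work.

```python
import numpy as np

def porosity_bounds(pore_probability, low=0.05, high=0.95):
    """Bound porosity using a per-voxel pore-probability map from a segmentation NN.
    Lower bound: count only voxels the network is confident are pore.
    Upper bound: count every voxel that is not confidently solid."""
    lower = np.mean(pore_probability >= high)
    nominal = np.mean(pore_probability >= 0.5)
    upper = np.mean(pore_probability > low)
    return lower, nominal, upper
```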