The sensitivity analysis algorithms developed by the radiation transport community in neutron transport codes such as MCNP and SCALE are used extensively in fields such as nuclear criticality safety. However, these techniques have seldom been considered for electron transport applications. In the past, the differential-operator method with single-scatter capability was implemented in Sandia National Laboratories’ Integrated TIGER Series (ITS) coupled electron-photon transport code. This work extends the sensitivity estimation techniques available in ITS by implementing an adjoint-based sensitivity method, GEAR-MC, to strengthen the code’s sensitivity analysis capabilities. To verify the accuracy of this method as extended to coupled electron-photon transport, it was compared against the central-difference and differential-operator methodologies in estimating sensitivity coefficients for an experiment performed by McLaughlin and Hussman. Energy deposition sensitivities were calculated using all three methods, and the agreement among them provides confidence in the accuracy of the newly implemented method. Unlike the current ITS implementation of the differential-operator method, the GEAR-MC implementation includes the option to calculate energy-dependent energy deposition sensitivities, i.e., the sensitivities of energy deposition tallies to energy-dependent cross sections. These cross sections may be those of the material, of individual elements in the material, or of specific reactions for an element. The energy-dependent sensitivities were compared to the energy-integrated sensitivity coefficients and exhibited a maximum percentage difference of 2.15%.
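For context, the central-difference methodology used above as a reference can be sketched in a few lines. This is a minimal illustration, not the ITS implementation: the callable run_tally, which stands in for a full transport calculation, and the ±1% perturbation size are assumptions chosen for clarity.

```python
# Minimal sketch of a central-difference relative sensitivity estimate,
# S = (x/R) * dR/dx, used as a reference for adjoint-based (GEAR-MC) and
# differential-operator estimates. `run_tally` is a hypothetical stand-in
# for a full transport run returning an energy deposition tally for a
# given cross-section value.
def central_difference_sensitivity(run_tally, x0, rel_delta=0.01):
    dx = rel_delta * x0                  # assumed +/-1% perturbation
    r_plus = run_tally(x0 + dx)          # forward-perturbed transport run
    r_minus = run_tally(x0 - dx)         # backward-perturbed transport run
    r0 = run_tally(x0)                   # nominal transport run
    return (x0 / r0) * (r_plus - r_minus) / (2.0 * dx)
```

Each perturbation requires an independent transport calculation, which is why adjoint-based methods such as GEAR-MC are attractive when many sensitivity coefficients are needed from a single run.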
Monte Carlo simulations are at the heart of many high-fidelity simulations and analyses for radiation transport systems. As is the case with any complex computational model, it is important to propagate sources of input uncertainty and characterize how they affect model output. Unfortunately, uncertainty quantification (UQ) is made difficult by the stochastic variability that Monte Carlo transport solvers introduce. The standard method to avoid corrupting the UQ statistics with the transport solver noise is to increase the number of particle histories, resulting in very high computational costs. In this contribution, we propose and analyze a sampling estimator based on the law of total variance to compute UQ variance even in the presence of residual noise from Monte Carlo transport calculations. We rigorously derive the statistical properties of the new variance estimator, compare its performance to that of the standard method, and demonstrate its use on neutral particle transport model problems involving both attenuation and scattering physics. We illustrate, both analytically and numerically, the estimator's statistical performance as a function of available computational budget and the distribution of that budget between UQ samples and particle histories. We show analytically and corroborate numerically that the new estimator is unbiased, unlike the standard approach, and is more accurate and precise than the standard estimator for the same computational budget.
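A minimal sketch of the idea follows, assuming each UQ sample returns a noisy tally mean q_i together with its estimated Monte Carlo standard error se_i; the function and variable names are illustrative, not the authors' code.

```python
import numpy as np

# Illustrative sketch: by the law of total variance,
#   Var[Q] = Var_theta( E[Q | theta] ) + E_theta( Var[Q | theta] ),
# so the parametric (UQ) variance can be estimated by subtracting the
# average within-sample solver variance from the total sample variance
# of the noisy tally means.
def uq_variance_ltv(q, se):
    q = np.asarray(q, dtype=float)       # noisy tally means, one per UQ sample
    se = np.asarray(se, dtype=float)     # per-sample MC standard errors
    total_var = np.var(q, ddof=1)        # variance across UQ samples (noisy)
    solver_var = np.mean(se**2)          # average MC noise contribution
    return total_var - solver_var        # noise-corrected UQ variance estimate
```

In this picture, the standard approach corresponds to reporting total_var alone, which is biased upward by the residual solver noise; the subtraction removes that bias without requiring additional particle histories.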
Heterogeneous materials under shock compression can be expected to reach different shock states throughout the material according to local differences in microstructure and the history of wave propagation. Here, a compact, multiple-beam focusing optic assembly is used with high-speed velocimetry to interrogate the shock response of porous tantalum films prepared through thermal-spray deposition. The distribution of particle velocities across a shocked interface is compared to results obtained using a set of defocused interferometric beams that sampled the shock response over larger areas. The two methods produced velocity distributions along the shock plateau with the same mean, while a larger variance was measured with the narrower beams. The finding was replicated using three-dimensional, mesoscopically resolved hydrodynamics simulations of solid tantalum with a pore structure mimicking statistical attributes of the material and accounting for radial divergence of the beams, with agreement across several impact velocities. Accounting for pore morphology in the simulations was found to be necessary for replicating the rise time of the shock plateau. The validated simulations were then used to show that while the average velocity along the shock plateau could be determined accurately with only a few interferometric beams, accurately determining the width of the velocity distribution, which here was approximately Gaussian, required a beam dimension much smaller than the spatial correlation length scale of the velocity field, here by a factor of ∼30, with implications for the study of other porous materials.
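The beam-size effect can be illustrated with a simple statistical sketch, independent of the experiments and simulations reported here: probing a spatially correlated velocity field with a footprint comparable to or larger than its correlation length suppresses the apparent variance. All values below (grid size, correlation length, beam footprints) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: a synthetic 1-D velocity field with Gaussian
# correlation (correlation length L_c, assumed value) is sampled with
# beams of different footprints. Larger footprints average over the
# correlated fluctuations and report a narrower velocity distribution.
rng = np.random.default_rng(0)
n, L_c = 4096, 30.0                      # grid points, correlation length (cells)
white = rng.standard_normal(n)
x = np.arange(n)
kernel = np.exp(-0.5 * (x - n / 2) ** 2 / L_c**2)
field = np.convolve(white, kernel, mode="same")
field /= field.std()                     # unit-variance correlated field

for beam in (1, L_c, 10 * L_c):          # beam footprint in grid cells
    w = int(beam)
    samples = field[: n - n % w].reshape(-1, w).mean(axis=1)
    print(f"beam={w:4d} cells  measured std = {samples.std():.3f}")
```

The mean of the beam-averaged samples is unchanged while their standard deviation shrinks with footprint, mirroring the experimental comparison between focused and defocused beams.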
In the 1970s and 1980s, researchers at Sandia National Laboratories produced electron albedo data for a range of materials. Since that time, the electron albedo data have been used for a wide variety of purposes, including the validation of Monte Carlo electron transport codes. This report was compiled to examine the electron albedo experimental results in the context of Integrated TIGER Series (ITS) validation. The report presents tables and figures that could provide insight into the underlying model form uncertainty present in the ITS code. Additionally, the report identifies potential means to reduce these model form errors by highlighting candidate refinements in the cross-section generation process.
The Integrated TIGER Series (ITS) transport code is a valuable tool for coupled photon-electron transport. A seven-problem validation suite exists to verify that the ITS transport code works as intended. It is important to ensure that data from benchmark problems are compared correctly with simulated data; moreover, the validation suite did not previously employ a consistent quantitative metric for comparing experimental and simulated datasets. The goal of this long-term project was therefore to expand the validation suite both in problem type and in the quality of its error assessment. To accomplish this, the seven validation problems in the suite were examined for potential drawbacks; where a drawback was identified, the problems were ranked by the severity of the drawback and the approachability of a solution. We determined that meaningful improvements could be made to the validation suite by improving the analysis for the Lockwood Albedo problem and by introducing the Ross dataset as an eighth problem. The Lockwood error analysis has been completed and will be integrated in the future. Analysis of the Ross dataset is unfinished, but significant progress has been made.
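As one example of what such a quantitative metric might look like, the following is a hypothetical sketch, not necessarily the metric adopted for the suite: a reduced chi-squared statistic that folds both experimental and Monte Carlo statistical uncertainties into a single figure of merit.

```python
import numpy as np

# Hypothetical illustration of one common comparison metric: a reduced
# chi-squared over matched experimental and simulated data points, with
# experimental and Monte Carlo statistical uncertainties combined in
# quadrature. A value near 1 indicates agreement within uncertainties.
def reduced_chi_squared(exp, exp_err, sim, sim_err):
    exp, sim = np.asarray(exp, float), np.asarray(sim, float)
    var = np.asarray(exp_err, float) ** 2 + np.asarray(sim_err, float) ** 2
    chi2 = np.sum((exp - sim) ** 2 / var)
    return chi2 / len(exp)               # normalized per data point
```

A metric of this kind makes comparisons consistent across validation problems, since every benchmark reduces to the same dimensionless statistic.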
Proceedings of the 14th International Conference on Radiation Shielding and 21st Topical Meeting of the Radiation Protection and Shielding Division, ICRS 2022/RPSD 2022