Traditional Monte Carlo methods for particle transport utilize source iteration to express the solution of the transport equation, the flux density, as a Neumann series. Our contribution is to show that the particle paths simulated within source iteration are associated with the adjoint flux density and that the adjoint particle paths are associated with the flux density. We make this assertion rigorous through stochastic calculus by representing the particle path used in source iteration as the solution to a stochastic differential equation (SDE). The solution to the adjoint Boltzmann equation is then expressed in terms of the same SDE, and the solution to the Boltzmann equation is expressed in terms of the SDE associated with the adjoint particle process. An important consequence is that the particle paths used within source iteration simultaneously provide Monte Carlo samples of the flux density and adjoint flux density in the detector and source regions, respectively. The significant practical implication is that particle trajectories can be reused to obtain both forward and adjoint quantities of interest. To the best of our knowledge, the reuse of entire particle paths has not appeared in the literature. Monte Carlo simulations are presented to support the reuse of the particle paths.
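For orientation, a brief sketch of the forward/adjoint duality underlying this claim may help; the operator K, source q, and detector response function σ_d below are generic placeholders rather than the paper's exact notation.

```latex
% Source iteration generates the Neumann series for the integral transport
% equation \psi = K\psi + q (K = combined streaming-collision operator):
\psi \;=\; \sum_{n=0}^{\infty} K^{n} q .
% With the adjoint flux \psi^{\dagger} solving
% \psi^{\dagger} = K^{\dagger}\psi^{\dagger} + \sigma_d ,
% duality gives the same detector response two ways,
\langle \sigma_d,\, \psi \rangle \;=\; \langle q,\, \psi^{\dagger} \rangle ,
% which is why one set of sampled paths can be read as forward samples in the
% detector region and adjoint samples in the source region.
```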
The sensitivity analysis algorithms developed by the radiation transport community in neutron transport codes such as MCNP and SCALE are used extensively in fields such as nuclear criticality safety. However, these techniques have seldom been considered for electron transport applications. In the past, the differential-operator method with the single-scatter capability was implemented in Sandia National Laboratories’ Integrated TIGER Series (ITS) coupled electron-photon transport code. This work extends the sensitivity estimation techniques available in ITS by implementing an adjoint-based sensitivity method, GEAR-MC, to strengthen its sensitivity analysis capabilities. To verify the accuracy of this method as extended to coupled electron-photon transport, it is compared against the central-difference and differential-operator methodologies for estimating sensitivity coefficients for an experiment performed by McLaughlin and Hussman. Energy deposition sensitivities were calculated using all three methods, and the comparison between them provides confidence in the accuracy of the newly implemented method. Unlike the current implementation of the differential-operator method in ITS, the GEAR-MC method was implemented with the option to calculate energy-dependent energy deposition sensitivities, that is, the sensitivity coefficients of energy deposition tallies to energy-dependent cross sections. The energy-dependent cross sections may be the cross sections for the material, for elements in the material, or for reactions of interest for an element. These sensitivities were compared to the energy-integrated sensitivity coefficients and exhibited a maximum percentage difference of 2.15%.
Shands, Emerson W.; Morel, Jim E.; Ahrens, Cory D.; Franke, Brian C.
We derive a new Galerkin quadrature (GQ) method for SN calculations that differs from the two methods preceding it in that a matrix inverse of an N × N matrix, where N is the number of directions in the quadrature set, is no longer required. Galerkin quadrature methods are designed for calculations with highly anisotropic scattering. Such methods are not simply special angular quadratures; they are also methods for representing the SN scattering source that offer several advantages relative to the standard scattering-source representation when highly truncated Legendre cross-section expansions must be used. Galerkin quadrature methods are also useful when the scattering is moderately anisotropic but the quadrature being used is not sufficiently accurate for the order of the scattering-source expansion that is required. We derive the new method and present computational results showing that its performance for two challenging problems is comparable to that of the two GQ methods that preceded it.
A technique using the photon kerma cross section for a material in combination with the number fraction from a photon energy spectrum has been developed to estimate the subzone dimension needed to provide an energy deposition profile in radiation transport calculations. The technique was verified using the ITS code for monoenergetic photon sources and a selection of photon spectra. A Python script was written to use the CEPXS cross-section file with a Rapture-calculated transmission spectrum to provide the dimensional estimates rapidly. The script is available to SNL users through the corporate GitLab server.
ITS is a powerful software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the make system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 95. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Computational design-based optimization is a widely used tool in science and engineering. Our report documents the successful use of particle sensitivity analysis for design-based optimization within Monte Carlo sampling-based particle simulation, a capability that was previously unavailable. Such a capability enables the particle simulation communities to go beyond forward simulation and promises to reduce the burden on analysts by accomplishing more with less computation.
Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
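As a rough illustration of the kind of walks involved, the following NumPy sketch approximates one-dimensional diffusion with a discrete-time Markov chain and adds an optional compound-Poisson jump term; it is a toy written for ordinary CPUs and does not use the TrueNorth or Loihi programming interfaces.

```python
# Toy sketch: DTMC random-walk approximation of 1-D diffusion, with an optional
# compound-Poisson jump term (a simple jump-diffusion).  Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def walk(n_walkers=100_000, n_steps=200, dt=1e-3, sigma=1.0,
         jump_rate=0.0, jump_scale=0.1):
    """Return final positions of walkers started at x = 0."""
    x = np.zeros(n_walkers)
    step = sigma * np.sqrt(dt)                # diffusion step size per time step
    for _ in range(n_steps):
        # DTMC step: move +step or -step with probability 1/2 each
        x += step * rng.choice([-1.0, 1.0], size=n_walkers)
        if jump_rate > 0.0:
            # each walker jumps with probability jump_rate*dt per step
            jumps = rng.random(n_walkers) < jump_rate * dt
            x[jumps] += rng.normal(0.0, jump_scale, jumps.sum())
    return x

# Pure diffusion: the sample variance should be close to sigma^2 * T = 0.2
x = walk()
print("diffusion variance ~", x.var())

# Jump-diffusion: variance picks up an additional jump contribution
xj = walk(jump_rate=5.0, jump_scale=0.5)
print("jump-diffusion variance ~", xj.var())
```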
Proceedings of the 14th International Conference on Radiation Shielding and 21st Topical Meeting of the Radiation Protection and Shielding Division, ICRS 2022/RPSD 2022
The probability distribution of the number of collisions experienced by electrons slowing down below a threshold energy is investigated to understand the impact of the statistical distribution of energy losses on the computational efficiency of Monte Carlo simulations. A theoretical model based on an exponentially peaked differential cross section, with parameters that reproduce the exact stopping power and straggling at a fixed energy, is shown to yield a Poisson distribution for the collision number. However, simulations with realistic energy-loss physics, including both inelastic and bremsstrahlung energy-loss interactions, reveal significant departures from the Poisson distribution. In particular, low collision numbers are more prominent when true cross sections are employed, while a Poisson distribution constructed with the exact variance-to-mean ratio is found to be unrealistically peaked. Detailed numerical investigations show that collisions with large energy losses, although infrequent, are statistically important in electron slowing down.
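A toy numerical experiment along these lines (not the paper's cross-section model) samples independent exponential energy losses per collision, for which the collision-number distribution is close to Poisson; swapping in a heavier-tailed loss sampler is one way to explore the departures described above.

```python
# Toy illustration: count collisions needed to slow from e0 below e_cut when the
# energy lost per collision is i.i.d. exponential, and compare with a Poisson
# distribution of the same mean.  All numbers here are arbitrary.
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)

def collisions_to_cutoff(e0, e_cut, sampler, n_hist=50_000):
    counts = np.zeros(n_hist, dtype=int)
    for i in range(n_hist):
        e, n = e0, 0
        while e > e_cut:
            e -= sampler()
            n += 1
        counts[i] = n
    return counts

mean_loss = 0.05                                  # mean energy loss per collision
counts = collisions_to_cutoff(1.0, 0.01, lambda: rng.exponential(mean_loss))

lam = counts.mean()
for n in range(int(lam) - 3, int(lam) + 4):
    empirical = (counts == n).mean()
    poisson = exp(-lam) * lam**n / factorial(n)
    print(f"n={n:3d}  empirical={empirical:.4f}  Poisson={poisson:.4f}")
```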
We propose to develop a computational sensitivity analysis capability for Monte Carlo sampling-based particle simulation relevant to Aleph, Cheetah-MC, Empire, Emphasis, ITS, SPARTA, and LAMMPS codes. These software tools model plasmas, radiation transport, low-density fluids, and molecular motion. Our report demonstrates how adjoint optimization methods can be combined with Monte Carlo sampling-based adjoint particle simulation. Our goal is to develop a sensitivity analysis to drive robust design-based optimization for Monte Carlo sampling-based particle simulation - a currently unavailable capability.
Computing stands to be radically improved by neuromorphic computing (NMC) approaches inspired by the brain's incredible efficiency and capabilities. Most NMC research, which aims to replicate the brain's computational structure and architecture in man-made hardware, has focused on artificial intelligence; however, less explored is whether this brain-inspired hardware can provide value beyond cognitive tasks. We demonstrate that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. Such random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Additionally, we show how the mathematical basis for a probabilistic solution involving a class of stochastic differential equations can leverage those simulations to provide solutions for a range of broadly applicable computational tasks. Despite being in an early development stage, we find that NMC platforms, at a sufficient scale, can drastically reduce the energy demands of high-performance computing platforms.
The widely parallel, spiking neural networks of neuromorphic processors can enable computationally powerful formulations. While recent interest has focused primarily on machine learning tasks, the space of appropriate applications is wide and continually expanding. Here, we leverage the parallel and event-driven structure to solve a steady-state heat equation using a random walk method. The random walk can be executed fully within a spiking neural network using stochastic neuron behavior, and we provide results from both IBM TrueNorth and Intel Loihi implementations. Additionally, we position this algorithm as a potential scalable benchmark for neuromorphic systems.
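For context, the sketch below shows the underlying random-walk estimator for a steady-state heat (Laplace) problem with Dirichlet boundaries, written in plain NumPy rather than as a spiking network: the temperature at an interior node equals the expected boundary temperature seen by a walker absorbed at the edge.

```python
# Random-walk estimate of the steady-state temperature at one interior point of a
# square plate (left edge held at 1.0, the other edges at 0.0).  Illustration only.
import numpy as np

rng = np.random.default_rng(2)
N = 20                                       # grid nodes run from 0 to N in each direction

def boundary_temp(i, j):
    return 1.0 if i == 0 else 0.0            # hot left edge, cold elsewhere

def temperature(i0, j0, n_walkers=20_000):
    total = 0.0
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_walkers):
        i, j = i0, j0
        while 0 < i < N and 0 < j < N:       # walk until a boundary node is hit
            di, dj = moves[rng.integers(4)]
            i, j = i + di, j + dj
        total += boundary_temp(i, j)
    return total / n_walkers

# The center value is exactly 0.25 by symmetry, so the estimate should be close.
print("T at center ~", temperature(N // 2, N // 2))
```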
We will develop Malliavin estimators for Monte Carlo radiation transport by formulating the governing jump stochastic differential equation and deriving the applicable estimators that produce sensitivities for our equations. Efficient and effective sensitivity estimates can be used for design optimization and uncertainty quantification, with broad utility for radiation environments. The technology demonstration will lower development risk for other particle-based simulation methods.
We describe the three electron-transport algorithms that have been implemented in the ITS Monte Carlo codes. While the underlying cross-section data are similar, each algorithm uses a fundamentally different method; at a high level, the three are best characterized as condensed history, multigroup, and single scatter. Through a set of comparisons with experimental data and some comparisons of purely numerical results, we discuss various attributes of each of the algorithms and show some of the defects that can affect results.
The design of satellites usually includes the objective of minimizing mass due to high launch costs, which is complicated by the need to protect sensitive electronics from the space radiation environment. There is growing interest in automated design optimization techniques to help achieve that objective. Traditional optimization approaches that rely exclusively on response functions (e.g. dose calculations) can be quite expensive when applied to transport problems. Previously we showed how adjoint-based transport sensitivities used in conjunction with gradient-based optimization algorithms can be quite effective in designing mass-efficient electron/proton shields in one-dimensional slab geometries. In this paper we extend that work to two-dimensional Cartesian geometries. This consists primarily of deriving the sensitivities to geometric changes, given a particular prescription for parametrizing the shield geometry. We incorporate these sensitivities into our optimization process and demonstrate their effectiveness in such design calculations.
Morel, Jim E.; Warsa, James S.; Franke, Brian C.; Prinja, Anil K.
We compare two methods for generating Galerkin quadratures. In method 1, the standard SN method is used to generate the moment-to-discrete matrix and the discrete-to-moment matrix is generated by inverting the moment-to-discrete matrix. This is a particular form of the original Galerkin quadrature method. In method 2, which we introduce here, the standard SN method is used to generate the discrete-to-moment matrix and the moment-to-discrete matrix is generated by inverting the discrete-to-moment matrix. With an N-point quadrature, method 1 has the advantage that it preserves N eigenvalues and N eigenvectors of the scattering operator in a pointwise sense. With an N-point quadrature, method 2 has the advantage that it generates consistent angular moment equations from the corresponding SN equations while preserving N eigenvalues of the scattering operator. Our computational results indicate that these two methods are quite comparable for the test problem considered.
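A minimal one-dimensional (slab-geometry, Legendre) sketch of the two constructions may clarify the distinction; the full SN setting uses spherical harmonics, but the pattern is the same. A deliberately non-Gauss quadrature is used here so that the standard moment-to-discrete/discrete-to-moment pair is not already consistent.

```python
# M maps Legendre moments to discrete ordinates values; D maps discrete values back
# to moments.  Method 1 keeps the standard M and inverts it to get D; method 2 keeps
# the standard D and inverts it to get M.  Illustration only.
import numpy as np
from numpy.polynomial.legendre import legval

N = 8
mu = np.linspace(-1 + 1.0 / N, 1 - 1.0 / N, N)   # equally spaced ordinates
w = np.full(N, 2.0 / N)                          # equal weights (intentionally crude)

def P(l, x):                                     # Legendre polynomial P_l(x)
    return legval(x, [0.0] * l + [1.0])

M_std = np.array([[(2 * l + 1) / 2.0 * P(l, m) for l in range(N)] for m in mu])
D_std = np.array([[w[n] * P(l, mu[n]) for n in range(N)] for l in range(N)])

D1 = np.linalg.inv(M_std)    # method 1: keep M, invert to get D
M2 = np.linalg.inv(D_std)    # method 2: keep D, invert to get M

print("standard pair consistent?", np.allclose(D_std @ M_std, np.eye(N)))  # False
print("method 1 consistent?     ", np.allclose(D1 @ M_std, np.eye(N)))     # True
print("method 2 consistent?     ", np.allclose(D_std @ M2, np.eye(N)))     # True
print("methods differ?          ", not np.allclose(D1, D_std))             # True
```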
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
We present an improved deterministic method for analyzing transport problems in random media. In the original method realizations were generated by means of a product quadrature rule; transport calculations were performed on each realization and the results combined to produce ensemble averages. In the present work we recognize that many of these realizations yield identical transport problems. We describe a method to generate only unique transport problems with the proper weighting to produce identical ensemble-averaged results at reduced computational cost. We also describe a method to ignore relatively unimportant realizations in order to obtain nearly identical results with further reduction in costs. Our results demonstrate that these changes allow for the analysis of problems of greater complexity than was practical for the original algorithm.
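The bookkeeping idea can be sketched in a few lines; `realizations` and `solve` below are placeholders rather than the report's actual data structures.

```python
# Collapse duplicate realizations into unique transport problems, each carrying the
# total probability weight of its duplicates, then form the ensemble average.
from collections import defaultdict

def ensemble_average(realizations, solve):
    weights = defaultdict(float)
    for layout, prob in realizations:            # layout must be hashable, e.g. a tuple
        weights[layout] += prob                  # merge duplicates, accumulate weight
    total = sum(weights.values())
    return sum(p * solve(layout) for layout, p in weights.items()) / total

# Example: three sampled two-cell layouts, two of which are identical.
samples = [(("A", "B"), 0.25), (("A", "B"), 0.25), (("B", "A"), 0.5)]
fake_solve = lambda layout: float(layout[0] == "A")   # stand-in for a transport solve
print(ensemble_average(samples, fake_solve))          # 0.5
```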
Stochastic media transport problems have long posed challenges for accurate modeling. Brute-force Monte Carlo or deterministic sampling of realizations can be expensive when high accuracy is required. The well-known Levermore-Pomraning (LP) closure is very simple and inexpensive but is inaccurate in many circumstances. We propose a generalization of the LP closure that may help bridge the gap between the two approaches. Our model consists of local calculations that approximately determine the relationship between ensemble-averaged angular fluxes and the corresponding averages at material interfaces. The expense and accuracy of the method are related to how "local" the model is and how much local detail it contains. We show through numerical results that our approach is more accurate than LP for benchmark problems, provided that we capture enough local detail. Thus we identify two approaches to using ensemble calculations for stochastic media: direct averaging of ensemble results for transport quantities of interest, or indirect use via a generalized LP equation to determine those same quantities; in some cases the latter method is more efficient. However, the method is subject to creating ill-posed problems if insufficient local detail is included in the model.
A Monte Carlo solution method for the system of deterministic equations arising in the application of stochastic collocation (SCM) and stochastic Galerkin (SGM) methods in radiation transport computations with uncertainty is presented for an arbitrary number of materials each containing two uncertain random cross sections. Moments of the resulting random flux are calculated using an intrusive and a non-intrusive Monte Carlo based SCM and two different SGM implementations each with two different truncation methods and compared to the brute force Monte Carlo sampling approach. For the intrusive SCM and SGM, a single set of particle histories is solved and weight adjustments are used to produce flux moments for the stochastic problem. Memory and runtime scaling of each method is compared for increased complexity in stochastic dimensionality and moment truncation. Results are also compared for efficiency in terms of a statistical figure-of-merit. The memory savings for the total-order truncation method prove significant over the full-tensor-product truncation. Scaling shows relatively constant cost per moment calculated of SCM and tensor-product SGM. Total-order truncation may be worthwhile despite poorer runtime scaling by achieving better accuracy at lower cost. The figure-of-merit results show that all of the intrusive methods can improve efficiency for calculating low-order moments, but the intrusive SCM approach is the most efficient for calculating high-order moments.
The master equation has been used to examine properties of transport in stochastic media. It has been shown previously that not only may the Levermore-Pomraning (LP) model be derived from the master equation for a description of ensemble-averaged transport quantities, but also that equations describing higher-order statistical moments may be obtained. We examine in greater detail the equations governing the second moments of the distribution of the angular fluxes, from which variances may be computed. We introduce a simple closure for these equations, as well as several models for estimating the variances of derived transport quantities. We revisit previous benchmarks for transport in stochastic media in order to examine the error of these new variance models. We find, not surprisingly, that the errors in these variance estimates are at least as large as the corresponding estimates of the average, and sometimes much larger. We also identify patterns in these variance estimates that may help guide the construction of more accurate models.
This is the final report on the LDRD, though the interested reader is referred to the ANS Transactions paper, which more thoroughly documents the technical work of this project.
We describe a method that enables Monte Carlo calculations to automatically achieve a user-prescribed error of representation for numerical results. Our approach is to iteratively adapt Monte Carlo functional-expansion tallies (FETs). The adaptivity is based on assessing the cellwise 2-norm of error due to both functional-expansion truncation and statistical uncertainty. These error metrics have been detailed by others for one-dimensional distributions. We extend their previous work to three-dimensional distributions and demonstrate the use of these error metrics for adaptivity. The method examines Monte Carlo FET results, estimates truncation and uncertainty error, and suggests a minimum-required expansion order and run time to achieve the desired level of error. Iteration is required for results to converge to the desired error. Our implementation of adaptive FETs is observed to converge to reasonable levels of desired error for the representation of four distributions. In practice, some distributions and desired error levels may require prohibitively large expansion orders and/or Monte Carlo run times.
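A toy one-dimensional version of the error split may help illustrate the mechanics (the work above treats three-dimensional distributions): Legendre FET coefficients are accumulated from samples, the statistical part of the cellwise 2-norm error comes from the coefficient standard errors, and the truncation part is estimated from the energy in orders beyond the kept expansion.

```python
# Toy 1-D functional-expansion tally on [-1, 1] with a crude statistical/truncation
# error split.  The sample density and orders are arbitrary choices for the demo.
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(3)

def fet(samples, max_order):
    """Legendre FET coefficients and their standard errors."""
    n = samples.size
    a, da = np.zeros(max_order + 1), np.zeros(max_order + 1)
    for l in range(max_order + 1):
        pl = legval(samples, [0.0] * l + [1.0])
        a[l] = (2 * l + 1) / 2.0 * pl.mean()
        da[l] = (2 * l + 1) / 2.0 * pl.std(ddof=1) / np.sqrt(n)
    return a, da

# Samples from an exponential-shaped density on [-1, 1] via acceptance-rejection.
x = rng.uniform(-1.0, 1.0, 400_000)
samples = x[rng.uniform(0.0, 1.0, x.size) < np.exp(x - 1.0)]

kept_order, probe_order = 1, 6
a, da = fet(samples, probe_order)

def two_norm(c, l_lo, l_hi):
    return np.sqrt(sum(2.0 / (2 * l + 1) * c[l] ** 2 for l in range(l_lo, l_hi + 1)))

print("statistical 2-norm error, orders 0..1:", two_norm(da, 0, kept_order))
print("truncation 2-norm estimate, orders 2..6:", two_norm(a, kept_order + 1, probe_order))
```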
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
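For reference, the weight-window bookkeeping itself is standard; a generic sketch follows, in which the window bounds in each cell would typically be set inversely proportional to the adjoint (importance) flux, as in the calculations described above.

```python
# Generic weight-window splitting/roulette sketch (standard Monte Carlo variance
# reduction, not the specific adjoint-driven implementation described above).
import random

def apply_weight_window(weight, w_low, w_high, rng=random):
    """Return the list of surviving particle weights (possibly empty)."""
    if weight > w_high:                            # split heavy particles
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                             # roulette light particles
        w_survive = 0.5 * (w_low + w_high)
        return [w_survive] if rng.random() < weight / w_survive else []
    return [weight]                                # inside the window: unchanged

# Example: a cell whose adjoint flux suggests a window of [0.1, 0.5]
print(apply_weight_window(1.7, 0.1, 0.5))          # splits into lighter particles
print(apply_weight_window(0.02, 0.1, 0.5))         # usually killed, sometimes boosted
```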
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
Extremely short collision mean free paths and near-singular elastic and inelastic differential cross sections (DCS) make analog Monte Carlo simulation an impractical tool for charged particle transport. The widely used alternative, the condensed history method, while efficient, also suffers from several limitations arising from the use of precomputed smooth distributions for sampling. There is much interest in developing computationally efficient algorithms that implement the correct transport mechanics. Here we present a nonanalog transport-based method that incorporates the correct transport mechanics and is computationally efficient for implementation in single event Monte Carlo codes. Our method systematically preserves important physics and is mathematically rigorous. It builds on higher order Fokker-Planck and Boltzmann Fokker-Planck representations of the scattering and energy-loss process, and we accordingly refer to it as a Generalized Boltzmann Fokker-Planck (GBFP) approach. We postulate the existence of nonanalog single collision scattering and energy-loss distributions (differential cross sections) and impose the constraint that the first few momentum transfer and energy loss moments be identical to corresponding analog values. This is effected through a decomposition or hybridizing scheme wherein the singular forward peaked, small energy-transfer collisions are isolated and de-singularized using different moment-preserving strategies, while the large angle, large energy-transfer collisions are described by the exact (analog) DCS or approximated to a high degree of accuracy. The inclusion of the latter component allows the higher angle and energy-loss moments to be accurately captured. This procedure yields a regularized transport model characterized by longer mean free paths and smoother scattering and energy transfer kernels than analog. In practice, acceptable accuracy is achieved with two rigorously preserved moments, but accuracy can be systematically increased to analog level by preserving successively higher moments with almost no change to the algorithm. Details of specific moment-preserving strategies will be described and results presented for dose in heterogeneous media due to a pencil beam and a line source of monoenergetic electrons. Error and runtimes of our nonanalog formulations will be contrasted against condensed history implementations.
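As a simplified illustration of the moment-preserving idea (not the GBFP implementation itself), the sketch below replaces a screened-Rutherford elastic angular distribution with a single discrete scattering angle and a reduced cross section chosen so that the first two momentum-transfer moments are preserved; the screening parameter is made up for the demonstration.

```python
# Single-delta moment-preserving reduction of a screened-Rutherford angular DCS:
# choose sigma* and mu* so that sigma*(1-mu*)^k = sigma*xi_k for k = 1, 2, where
# xi_k = <(1-mu)^k> under the analog DCS.  Illustration only.
from scipy.integrate import quad

eta = 0.01                                            # screening parameter (arbitrary)
dcs = lambda mu: 1.0 / (1.0 - mu + 2.0 * eta) ** 2    # un-normalized angular DCS

I0 = quad(dcs, -1.0, 1.0)[0]                          # proportional to the total cross section
xi1 = quad(lambda mu: (1.0 - mu) * dcs(mu), -1.0, 1.0)[0] / I0
xi2 = quad(lambda mu: (1.0 - mu) ** 2 * dcs(mu), -1.0, 1.0)[0] / I0

one_minus_mu_star = xi2 / xi1                         # discrete scattering angle
sigma_ratio = xi1 ** 2 / xi2                          # sigma* / sigma_analog

print(f"discrete angle mu* = {1.0 - one_minus_mu_star:.4f}")
print(f"reduced cross section sigma*/sigma = {sigma_ratio:.4e}")
print(f"mean free path stretched by a factor of {1.0 / sigma_ratio:.1f}")
```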
This document describes the modeling of the physics (and eventually the features) in the Integrated TIGER Series (ITS) codes [Franke 04]. The material is largely drawn from various sources in the open literature (especially [Seltzer 88], [Seltzer 91], [Lorence 89], and [Halbleib 92]), although those sources often describe the ETRAN code, from which the physics engine of ITS is derived but with which it is not necessarily identical. This is meant to be an evolving document, with more coverage and detail added as time goes on; as such, entire sections are still incomplete. Presently, this document covers the continuous-energy ITS codes, with more completeness on photon transport (though electron transport is not completely ignored). In particular, this document does not cover the Multigroup code, the MCODES (externally applied electromagnetic fields), or high-energy phenomena (photon pair production). In this version, equations are largely left to the references, though they may be pulled in over time.
This test plan describes the testing strategy for the ITS (Integrated-TIGER-Series) suite of codes. The processes and procedures for performing both verification and validation tests are described. ITS Version 5.0 was developed under the NNSA's ASC program and supports Sandia's stockpile stewardship mission.
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
We consider the steady-state transport of normally incident pencil beams of radiation in slabs of material. A method has been developed for determining the exact radial moments of three-dimensional (3-D) beams of radiation as a function of depth into the slab, by solving systems of one-dimensional (1-D) transport equations. We implement these radial-moment equations in the ONEBFP discrete ordinates code and simulate energy-dependent, coupled electron-photon beams using CEPXS-generated cross sections. Modified PN synthetic acceleration is employed to speed up the iterative convergence of the 1-D charged-particle calculations. For high-energy photon beams, a hybrid Monte Carlo/discrete ordinates method is examined. We demonstrate the efficiency of the calculations and make comparisons with 3-D Monte Carlo calculations. Thus, by solving 1-D transport equations, we obtain realistic multidimensional information concerning the broadening of electron-photon beams. This information is relevant to fields such as industrial radiography, medical imaging, radiation oncology, particle accelerators, and lasers.
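For readers unfamiliar with the quantities involved, the radial moments referred to above are of the following form (the symbols are generic and may differ from the paper's).

```latex
% Notation sketch: for a pencil beam incident along z, the radial moments of the
% scalar flux \phi(r,z) are
\phi_{2n}(z) \;=\; \int_0^{\infty} r^{2n}\, \phi(r,z)\, 2\pi r \, dr ,
\qquad n = 0, 1, 2, \dots
% so that, for example, \sqrt{\phi_2(z)/\phi_0(z)} is the rms radius of the beam at
% depth z; the method above obtains such moments by solving one-dimensional
% transport equations in z rather than the full three-dimensional problem.
```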