Shands, Emerson W.; Morel, Jim E.; Ahrens, Cory D.; Franke, Brian C.
We derive a new Galerkin quadrature (GQ) method for $S_N$ calculations that differs from the two methods preceding it in that the inverse of an $N \times N$ matrix, where $N$ is the number of directions in the quadrature set, is no longer required. Galerkin quadrature methods are designed for calculations with highly anisotropic scattering. Such methods are not simply special angular quadratures; they are also methods for representing the $S_N$ scattering source that offer several advantages over the standard scattering source representation when highly truncated Legendre cross-section expansions must be used. Galerkin quadrature methods are also useful when the scattering is moderately anisotropic but the quadrature being used is not sufficiently accurate for the order of the scattering source expansion that is required. We derive the new method and present computational results showing that its performance on two challenging problems is comparable to that of the two GQ methods that preceded it.
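To make the matrix-inverse issue concrete, here is a minimal sketch of the standard GQ construction in generic $S_N$ notation (the symbols are conventional, not drawn from the paper): the moment-to-discrete matrix $\mathbf{M}$ is built from the spherical-harmonic basis evaluated at the $N$ quadrature directions, and GQ defines the discrete-to-moment operator as its exact inverse,

\[ \mathbf{D} = \mathbf{M}^{-1}, \qquad \mathbf{Q}_s = \mathbf{M}\,\boldsymbol{\Sigma}\,\mathbf{D}\,\boldsymbol{\psi}, \]

where $\boldsymbol{\Sigma}$ is the diagonal matrix of Legendre scattering moments and $\boldsymbol{\psi}$ is the vector of discrete angular fluxes. The two earlier GQ methods require forming $\mathbf{M}^{-1}$; the new method is constructed so that this $N \times N$ inverse is avoided.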
Traditional Monte Carlo methods for particle transport use source iteration to express the solution of the transport equation, the flux density, as a Neumann series. Our contribution is to show that the particle paths simulated within source iteration are associated with the adjoint flux density, and that the adjoint particle paths are associated with the flux density. We make this assertion rigorous through stochastic calculus by representing the particle path used in source iteration as the solution to a stochastic differential equation (SDE). The solution to the adjoint Boltzmann equation is then expressed in terms of the same SDE, and the solution to the Boltzmann equation is expressed in terms of the SDE associated with the adjoint particle process. An important consequence is that the particle paths used within source iteration simultaneously provide Monte Carlo samples of the flux density and the adjoint flux density in the detector and source regions, respectively. The significant practical implication is that particle trajectories can be reused to obtain both forward and adjoint quantities of interest. To the best of our knowledge, the reuse of entire particle paths has not appeared in the literature. Monte Carlo simulations are presented to support the reuse of particle paths.
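For orientation, the duality that underlies the path-reuse claim can be stated in generic operator notation (assumed here, not quoted from the paper): writing the transport equation as $\phi = \mathcal{K}\phi + q$, source iteration generates the Neumann series, and the standard adjoint identity pairs forward and adjoint solutions,

\[ \phi = \sum_{n=0}^{\infty} \mathcal{K}^n q, \qquad \langle q^\dagger, \phi \rangle = \langle \phi^\dagger, q \rangle, \]

where $q^\dagger$ is the detector response function and $\phi^\dagger$ solves the adjoint equation $\phi^\dagger = \mathcal{K}^\dagger \phi^\dagger + q^\dagger$. The same inner product can thus be tallied from either family of paths, which is what makes reuse possible.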
The sensitivity analysis algorithms developed by the radiation transport community in neutron transport codes such as MCNP and SCALE are used extensively in fields such as nuclear criticality safety. However, these techniques have seldom been applied to electron transport. Previously, the differential-operator method with a single-scatter capability was implemented in Sandia National Laboratories’ Integrated TIGER Series (ITS) coupled electron-photon transport code. This work extends the sensitivity estimation techniques available in ITS by implementing an adjoint-based sensitivity method, GEAR-MC, to strengthen its sensitivity analysis capabilities. To verify the accuracy of the method as extended to coupled electron-photon transport, it is compared against the central-difference and differential-operator methodologies in estimating sensitivity coefficients for an experiment performed by McLaughlin and Hussman. Energy deposition sensitivities were calculated using all three methods, and the agreement among them provides confidence in the accuracy of the newly implemented method. Unlike the current implementation of the differential-operator method in ITS, the GEAR-MC method was implemented with the option to calculate energy-dependent energy deposition sensitivities, i.e., the sensitivity coefficients of energy deposition tallies to energy-dependent cross sections. The energy-dependent cross sections may be those of the material, of elements in the material, or of reactions of interest for an element. These sensitivities were compared to the energy-integrated sensitivity coefficients and exhibited a maximum percentage difference of 2.15%.
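As a point of reference, the sensitivity coefficients discussed here are conventionally defined as relative derivatives of a response $R$ with respect to a cross section $\Sigma$ (groupwise notation assumed):

\[ S_{R,\Sigma_g} = \frac{\Sigma_g}{R}\,\frac{\partial R}{\partial \Sigma_g}, \qquad S_{R,\Sigma} = \sum_g S_{R,\Sigma_g}, \]

so the energy-integrated coefficient is, to first order, the sum of the energy-dependent (per-group) coefficients, which is the comparison reported above.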
A technique that combines the photon kerma cross section of a material with the number fractions of a photon energy spectrum has been developed to estimate the subzone dimension needed to resolve an energy deposition profile in radiation transport calculations. The technique was verified with the ITS code for monoenergetic photon sources and a selection of photon spectra. A Python script was written to use the CEPXS cross-section file together with a Rapture-calculated transmission spectrum to provide the dimensional estimates rapidly. The script is available to SNL users through the corporate GitLab server.
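A minimal sketch of the spectrum-weighting step the technique relies on (one plausible reading; the function names and exact estimator in the released script are assumptions, not reproduced from it):

```python
# Sketch: number-fraction-weighted photon kerma over a spectrum.
# kerma(E) and the (energy, number_fraction) bins are hypothetical inputs;
# the SNL script reads these from CEPXS and Rapture output instead.
def spectrum_averaged_kerma(spectrum, kerma):
    """spectrum: iterable of (energy_MeV, number_fraction) pairs, with
    number fractions summing to 1; kerma: callable mapping energy to the
    kerma cross section. Returns the spectrum-averaged kerma."""
    return sum(frac * kerma(energy) for energy, frac in spectrum)
```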
ITS is a powerful software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease of use of the make system, combined with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking, provides experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models to describe the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend their capabilities to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes and (2) conversion to Fortran 95. The general user-friendliness of the software has been enhanced through memory allocation that reduces the need for users to modify and recompile the code.
Computational design-based optimization is a widely used tool in science and engineering. Our report documents the successful use of particle sensitivity analysis for design-based optimization within Monte Carlo sampling-based particle simulation, a capability that was previously unavailable. Such a capability enables the particle simulation communities to go beyond forward simulation and promises to reduce the burden on overworked analysts by getting more done with less computation.
Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
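For readers unfamiliar with the underlying kernel, a minimal CPU sketch of the discrete-time Markov chain random walk whose walker density approximates diffusion (this is the conventional formulation; the neuromorphic implementations map walkers onto spiking neurons instead):

```python
import random

def random_walk_density(n_walkers=10_000, n_steps=100, p_move=0.5):
    """Density of 1D DTMC walkers after n_steps; approximates the
    Green's function of the diffusion equation u_t = D u_xx."""
    positions = [0] * n_walkers
    for _ in range(n_steps):
        for i in range(n_walkers):
            if random.random() < p_move:  # hold with probability 1 - p_move
                positions[i] += random.choice((-1, 1))
    counts = {}
    for x in positions:
        counts[x] = counts.get(x, 0) + 1
    return {x: c / n_walkers for x, c in sorted(counts.items())}
```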
The probability distribution of the number of collisions experienced by electrons slowing down below a threshold energy is investigated to understand the impact of the statistical distribution of energy losses on the computational efficiency of Monte Carlo simulations. A theoretical model based on an exponentially peaked differential cross section, with parameters that reproduce the exact stopping power and straggling at a fixed energy, is shown to yield a Poisson distribution for the collision number. However, simulations with realistic energy-loss physics, including both inelastic and bremsstrahlung energy-loss interactions, reveal significant departures from the Poisson distribution. In particular, low collision numbers are more prominent when true cross sections are employed, while a Poisson distribution constructed with the exact variance-to-mean ratio is found to be unrealistically peaked. Detailed numerical investigations show that collisions with large energy losses, although infrequent, are statistically important in electron slowing down.
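For reference, the theoretical model's prediction takes the familiar form (generic notation): if the slowing-down history produces a mean collision number $\bar{n}$, the Poisson collision number distribution is

\[ P(n) = \frac{\bar{n}^{\,n}\, e^{-\bar{n}}}{n!}, \]

whose variance-to-mean ratio is exactly 1; the realistic simulations described above depart from this form in the low-$n$ region.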
We propose to develop a computational sensitivity analysis capability for Monte Carlo sampling-based particle simulation relevant to the Aleph, Cheetah-MC, Empire, Emphasis, ITS, SPARTA, and LAMMPS codes. These software tools model plasmas, radiation transport, low-density fluids, and molecular motion. Our report demonstrates how adjoint optimization methods can be combined with Monte Carlo sampling-based adjoint particle simulation. Our goal is to develop a sensitivity analysis that drives robust design-based optimization for Monte Carlo sampling-based particle simulation, a capability that is currently unavailable.
Computing stands to be radically improved by neuromorphic computing (NMC) approaches inspired by the brain's remarkable efficiency and capabilities. Most NMC research, which aims to replicate the brain's computational structure and architecture in man-made hardware, has focused on artificial intelligence; less explored is whether this brain-inspired hardware can provide value beyond cognitive tasks. We demonstrate that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. Such random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Additionally, we show how the mathematical basis for a probabilistic solution to a class of stochastic differential equations can leverage those simulations to provide solutions for a range of broadly applicable computational tasks. Despite being in an early development stage, we find that NMC platforms, at a sufficient scale, can drastically reduce the energy demands of high-performance computing platforms.
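One standard form of the probabilistic representation alluded to (stated here as illustration, in notation assumed rather than quoted): for a diffusion $dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t$ started at $x$ and stopped at the first exit time $\tau$ from the domain, the solution of the associated elliptic boundary-value problem with boundary data $g$ is

\[ u(x) = \mathbb{E}_x\!\left[ g(X_\tau) \right], \]

so averaging a boundary functional over many simulated walks yields a pointwise solution estimate, which is exactly the workload the neuromorphic random walks supply.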
The widely parallel, spiking neural networks of neuromorphic processors can enable computationally powerful formulations. While recent interest has focused primarily on machine learning tasks, the space of appropriate applications is wide and continually expanding. Here, we leverage this parallel and event-driven structure to solve a steady-state heat equation using a random walk method. The random walk can be executed fully within a spiking neural network using stochastic neuron behavior, and we provide results from both IBM TrueNorth and Intel Loihi implementations. Additionally, we position this algorithm as a potential scalable benchmark for neuromorphic systems.
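A minimal CPU sketch of the random walk estimator for the steady-state heat equation (the 1D rod with fixed end temperatures is shown for brevity and is an assumption for illustration; the neuromorphic versions execute the walk in spiking neurons):

```python
import random

def heat_at(x, left=0.0, right=1.0, t_left=100.0, t_right=0.0,
            step=0.01, n_walks=5_000):
    """Estimate the steady-state temperature at position x on a rod with
    ends held at t_left and t_right: average the boundary temperature at
    the end where each unbiased random walk exits."""
    total = 0.0
    for _ in range(n_walks):
        pos = x
        while left < pos < right:
            pos += step * random.choice((-1.0, 1.0))  # unbiased step
        total += t_left if pos <= left else t_right
    return total / n_walks

# Example: the midpoint of the rod should come out near 50.0.
# print(heat_at(0.5))
```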