Publications

A Stochastic Calculus Approach to Boltzmann Transport

Nuclear Science and Engineering

Smith, John D.; Lehoucq, Richard B.; Franke, Brian C.

Traditional Monte Carlo methods for particle transport use source iteration to express the solution of the transport equation, the flux density, as a Neumann series. Our contribution is to show that the particle paths simulated within source iteration are associated with the adjoint flux density and that the adjoint particle paths are associated with the flux density. We make this assertion rigorous through stochastic calculus by representing the particle path used in source iteration as the solution to a stochastic differential equation (SDE). The solution to the adjoint Boltzmann equation is then expressed in terms of the same SDE, and the solution to the Boltzmann equation is expressed in terms of the SDE associated with the adjoint particle process. An important consequence is that the particle paths used within source iteration simultaneously provide Monte Carlo samples of the flux density and the adjoint flux density in the detector and source regions, respectively. The significant practical implication is that particle trajectories can be reused to obtain both forward and adjoint quantities of interest. To the best of our knowledge, the reuse of entire particle paths has not appeared in the literature. Monte Carlo simulations are presented to support the reuse of the particle paths.
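
For readers less familiar with source iteration, a schematic operator form of the Neumann series it generates is sketched below; the operator names H (transport/collision kernel) and S (source) are illustrative placeholders, not the paper's notation.

\[
  \psi = S + H\psi
  \qquad\Longrightarrow\qquad
  \psi = \sum_{n=0}^{\infty} H^{n} S,
\]

Each term H^n S corresponds to particles that have undergone n collisions, and Monte Carlo source iteration samples these terms by simulating particle paths; the adjoint problem replaces H with its adjoint H*, and the paper's result links the paths sampled for one problem to the solution of the other.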

More Details

Stochastic Neuromorphic Circuits for Solving MAXCUT

Proceedings - 2023 IEEE International Parallel and Distributed Processing Symposium, IPDPS 2023

Theilman, Bradley; Wang, Yipu W.; Parekh, Ojas D.; Severa, William M.; Smith, John D.; Aimone, James B.

Finding the maximum cut of a graph (MAXCUT) is a classic optimization problem that has motivated parallel algorithm development. While approximation algorithms for MAXCUT offer attractive theoretical guarantees and demonstrate compelling empirical performance, such approaches can shift the dominant computational cost to the stochastic sampling operations. Neuromorphic computing, which uses the organizing principles of the nervous system to inspire new parallel computing architectures, offers a possible solution. One ubiquitous feature of natural brains is stochasticity: the individual elements of biological neural networks possess an intrinsic randomness that serves as a resource enabling their unique computational capacities. By designing circuits and algorithms that make use of randomness as natural brains do, we hypothesize that the intrinsic randomness in microelectronic devices can be turned into a valuable component of a neuromorphic architecture, enabling more efficient computations. Here, we present neuromorphic circuits that transform the stochastic behavior of a pool of random devices into useful correlations that drive stochastic solutions to MAXCUT. We show that these circuits perform favorably in comparison to software solvers and argue that this neuromorphic hardware implementation provides a path to scaling advantages. This work demonstrates the utility of combining neuromorphic principles with intrinsic randomness as a computational resource for new computational architectures.
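
As a point of reference for the stochastic approach described above, the following is a minimal software sketch of a randomized MAXCUT heuristic (random assignment plus greedy vertex flips). It is illustrative only and is not the neuromorphic circuit or solver used in the paper; the function name and example graph are invented for the sketch.

    import random

    def random_cut_heuristic(edges, num_nodes, sweeps=100, seed=0):
        """Randomized local-search heuristic for MAXCUT (illustrative only)."""
        rng = random.Random(seed)
        # Start from a uniformly random partition: side[v] is 0 or 1.
        side = [rng.randint(0, 1) for _ in range(num_nodes)]
        adj = [[] for _ in range(num_nodes)]
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        for _ in range(sweeps):
            v = rng.randrange(num_nodes)
            # Flip vertex v if doing so cuts more of its incident edges.
            same = sum(1 for w in adj[v] if side[w] == side[v])
            diff = len(adj[v]) - same
            if same > diff:
                side[v] ^= 1
        cut = sum(1 for u, v in edges if side[u] != side[v])
        return side, cut

    # Example: a 4-cycle, whose optimal cut value is 4.
    print(random_cut_heuristic([(0, 1), (1, 2), (2, 3), (3, 0)], 4))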

More Details

A review of non-cognitive applications for neuromorphic computing

Neuromorphic Computing and Engineering

Aimone, James B.; Date, Prasanna; Fonseca-Guerra, Gabriel A.; Hamilton, Kathleen E.; Henke, Kyle; Kay, Bill; Kenyon, Garrett T.; Kulkarni, Shruti R.; Parsa, Maryam; Schuman, Catherine D.; Severa, William M.; Smith, John D.

Though neuromorphic computers have typically targeted applications in machine learning and neuroscience (‘cognitive’ applications), they have many computational characteristics that are attractive for a wide variety of computational problems. In this work, we review the current state of the art for non-cognitive applications on neuromorphic computers, including simple computational kernels for composition, graph algorithms, constrained optimization, and signal processing. We discuss the advantages of using neuromorphic computers for these different applications, as well as the challenges that still remain. The ultimate goal of this work is to bring awareness of this class of problems for neuromorphic systems to the broader community, to encourage further work in this area, and to ensure that these applications are considered in the design of future neuromorphic systems.

More Details

AI-enhanced Codesign for Next-Generation Neuromorphic Circuits and Systems

Cardwell, Suma G.; Smith, John D.; Crowder, Douglas C.

This report details work that was completed to address the Fiscal Year 2022 Advanced Science and Technology (AS&T) Laboratory Directed Research and Development (LDRD) call for “AI-enhanced Co-Design of Next Generation Microelectronics.” This project required concurrent contributions from the fields of 1) materials science, 2) devices and circuits, 3) physics of computing, and 4) algorithms and system architectures. During this project, we developed AI-enhanced circuit design methods that relied on reinforcement learning and evolutionary algorithms. The AI-enhanced design methods were tested on neuromorphic circuit design problems that have real-world applications related to Sandia’s mission needs. The developed methods enable the design of circuits, including circuits that are built from emerging devices, and they were also extended to enable novel device discovery. We expect that these AI-enhanced design methods will accelerate progress towards developing next-generation, high-performance neuromorphic computing systems.
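
To give a flavor of the evolutionary-algorithm side of the codesign methods mentioned above, here is a minimal, generic sketch of evolutionary search over circuit parameters. The fitness function, parameter bounds, and population settings are placeholders and are not the project's actual models or tooling.

    import random

    def evolve(fitness, bounds, pop_size=20, generations=50, sigma=0.1, seed=0):
        """Generic (mu + lambda)-style evolutionary search (illustrative only)."""
        rng = random.Random(seed)
        pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
        for _ in range(generations):
            # Mutate every parent with Gaussian noise, clipped to the bounds.
            children = []
            for parent in pop:
                child = [min(max(x + rng.gauss(0.0, sigma * (hi - lo)), lo), hi)
                         for x, (lo, hi) in zip(parent, bounds)]
                children.append(child)
            # Keep the best pop_size individuals from parents and children.
            pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
        return pop[0]

    # Placeholder fitness: prefer a (gain, bias) pair near a target operating point.
    best = evolve(lambda p: -((p[0] - 2.0) ** 2 + (p[1] - 0.5) ** 2),
                  bounds=[(0.0, 5.0), (0.0, 1.0)])
    print(best)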

More Details

Neural Mini-Apps as a Tool for Neuromorphic Computing Insight

ACM International Conference Proceeding Series

Vineyard, Craig M.; Cardwell, Suma G.; Chance, Frances S.; Musuvathy, Srideep M.; Rothganger, Fredrick R.; Severa, William M.; Smith, John D.; Teeter, Corinne M.; Wang, Felix W.; Aimone, James B.

Neuromorphic computing (NMC) is an exciting paradigm seeking to incorporate principles from biological brains to enable advanced computing capabilities. This encompasses not only algorithms, such as neural networks, but also the question of how to structure the enabling computational architectures for executing such workloads. Assessing the merits of NMC is more nuanced than simply comparing singular, historical performance metrics from traditional approaches with those of NMC. The novel computational architectures require new algorithms to make use of their differing computational approaches, and neural algorithms themselves are emerging across an increasing range of application domains. Accordingly, we propose following the example of high-performance computing, which has used context-capturing mini-apps and abstraction tools to explore the merits of computational architectures. Here we present Neural Mini-Apps in a neural circuit tool called Fugu as a means of gaining NMC insight.

More Details

Neuromorphic scaling advantages for energy-efficient random walk computations

Nature Electronics

Smith, John D.; Hill, Aaron J.; Reeder, Leah E.; Franke, Brian C.; Lehoucq, Richard B.; Parekh, Ojas D.; Severa, William M.; Aimone, James B.

Neuromorphic computing, which aims to replicate the computational structure and architecture of the brain in synthetic hardware, has typically focused on artificial intelligence applications. What is less explored is whether such brain-inspired hardware can provide value beyond cognitive tasks. Here we show that the high degree of parallelism and configurability of spiking neuromorphic architectures makes them well suited to implement random walks via discrete-time Markov chains. These random walks are useful in Monte Carlo methods, which represent a fundamental computational tool for solving a wide range of numerical computing tasks. Using IBM’s TrueNorth and Intel’s Loihi neuromorphic computing platforms, we show that our neuromorphic computing algorithm for generating random walk approximations of diffusion offers advantages in energy-efficient computation compared with conventional approaches. We also show that our neuromorphic computing algorithm can be extended to more sophisticated jump-diffusion processes that are useful in a range of applications, including financial economics, particle physics and machine learning.
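
For context, the following is a minimal software sketch of the kind of discrete-time random walk that approximates diffusion; it runs on a CPU with NumPy and is not the TrueNorth or Loihi implementation described in the paper. Walker counts, step counts, and the variance check are arbitrary illustrative choices.

    import numpy as np

    def random_walk_positions(num_walkers=100_000, num_steps=200, seed=0):
        """Final positions of unbiased +/-1 random walks (a discrete-time Markov chain)."""
        rng = np.random.default_rng(seed)
        # Each step moves a walker left or right with equal probability.
        steps = rng.choice([-1, 1], size=(num_walkers, num_steps))
        return steps.sum(axis=1)

    positions = random_walk_positions()
    # Diffusive scaling: the empirical variance should be close to num_steps.
    print("empirical variance:", positions.var(), "expected:", 200)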

More Details

Probabilistic Neural Circuits leveraging AI-Enhanced Codesign for Random Number Generation

Proceedings - 2022 IEEE International Conference on Rebooting Computing, ICRC 2022

Cardwell, Suma G.; Schuman, Catherine D.; Smith, John D.; Patel, Karan; Kwon, Jaesuk; Liu, Samuel; Allemang, Christopher R.; Misra, Shashank M.; Incorvia, Jean A.; Aimone, James B.

Stochasticity is ubiquitous in the world around us, yet our predominant computing paradigm is deterministic. Random number generation (RNG) can be a computationally inefficient operation in such systems, especially for larger workloads. Our work leverages the underlying physics of emerging devices to develop probabilistic neural circuits that generate random numbers from a given distribution. However, codesign of novel circuits and systems that leverage inherent device stochasticity is a hard problem, largely because of the size of the design space and the complexity of navigating it; it requires concurrent input from multiple areas of the design stack, from algorithms and architectures to circuits and devices. In this paper, we present examples of optimal circuits developed using AI-enhanced codesign techniques under constraints from emerging devices and algorithms. Our AI-enhanced codesign approach accelerated design and enabled interactions between experts from different areas of the microelectronics design stack, including theory, algorithms, circuits, and devices. We demonstrate optimal probabilistic neural circuits, using magnetic tunnel junction and tunnel diode devices, that generate random numbers from a given distribution.
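
As a software analogue of sampling a target distribution from stochastic binary devices, the sketch below composes Bernoulli "coin flips" into samples from an arbitrary discrete distribution via an inverse-CDF lookup. It is a hypothetical illustration, not the magnetic tunnel junction or tunnel diode circuit presented in the paper; the function names are invented for the example.

    import random

    def random_bit(rng):
        """Stand-in for a stochastic device that emits an unbiased random bit."""
        return rng.randint(0, 1)

    def sample_discrete(probs, rng, num_bits=32):
        """Draw an index from the discrete distribution `probs` using only random bits."""
        # Compose num_bits device bits into a uniform value in [0, 1).
        u = sum(random_bit(rng) / 2.0 ** (i + 1) for i in range(num_bits))
        # Inverse-CDF lookup against the cumulative probabilities.
        cumulative = 0.0
        for idx, p in enumerate(probs):
            cumulative += p
            if u < cumulative:
                return idx
        return len(probs) - 1

    rng = random.Random(1)
    counts = [0, 0, 0]
    for _ in range(10_000):
        counts[sample_discrete([0.2, 0.5, 0.3], rng)] += 1
    print(counts)  # roughly proportional to [0.2, 0.5, 0.3]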

More Details

CSRI Summer Proceedings 2021

Smith, John D.; Galvan, Edgar

The Computer Science Research Institute (CSRI) brings university faculty and students to Sandia National Laboratories for focused collaborative research on Department of Energy (DOE) computer and computational science problems. The institute provides an opportunity for university researchers to learn about problems in computer and computational science at DOE laboratories and to help transfer the results of their research to programs at the labs. Some specific CSRI research interest areas are: scalable solvers, optimization, algebraic preconditioners, graph-based, discrete, and combinatorial algorithms, uncertainty estimation, validation and verification methods, mesh generation, dynamic load balancing, virus and other malicious-code defense, visualization, scalable cluster computers, beyond-Moore's-Law computing, exascale computing tools and application design, reduced-order and multiscale modeling, parallel input/output, and theoretical computer science. The CSRI Summer Program is organized by CSRI and includes a weekly seminar series and the publication of a summer proceedings.

More Details
