Publications

Results 926–950 of 9,998


Low-Power Deep Learning Inference using the SpiNNaker Neuromorphic Platform

Vineyard, Craig M.; Dellana, Ryan A.; Aimone, James B.; Severa, William M.

In this presentation we discuss recent results on using the SpiNNaker neuromorphic platform (48-chip model) for deep learning neural network inference. We use the Sandia-developed Whetstone spiking deep learning library to train deep multi-layer perceptrons and convolutional neural networks suitable for the spiking substrate on the neural hardware architecture. By using the massively parallel nature of SpiNNaker, we are able to achieve, under certain network topologies, substantial network tiling and consequently impressive inference throughput. Such high-throughput systems may eventually find use in remote sensing, where large images need to be chipped, scanned, and processed quickly. Additionally, we explore complex topologies that push the limits of the SpiNNaker routing hardware and investigate how that impacts the mapping of software-implemented networks to on-hardware instantiations.


A Model for Atomic Precision p-Type Doping with Diborane on Si(100)-2×1

Journal of Physical Chemistry C

Campbell, Quinn C.; Ivie, Jeffrey A.; Bussmann, Ezra B.; Schmucker, Scott W.; Baczewski, Andrew D.; Misra, Shashank M.

Diborane (B₂H₆) is a promising molecular precursor for atomic precision p-type doping of silicon that has recently been experimentally demonstrated [Škereň et al., Nat. Electron., 2020]. We use density functional theory (DFT) calculations to determine the reaction pathway for diborane dissociating into a species that will incorporate as electrically active substitutional boron after adsorbing onto the Si(100)-2×1 surface. Our calculations indicate that diborane must overcome an energy barrier to adsorb, explaining the experimentally observed low sticking coefficient (<1 × 10⁻⁴ at room temperature) and suggesting that heating can be used to increase the adsorption rate. Upon sticking, diborane has an ≈50% chance of splitting into two BH₃ fragments versus merely losing hydrogen to form a dimer such as B₂H₄. As boron dimers are likely electrically inactive, whether this latter reaction occurs is shown to be predictive of the incorporation rate. The dissociation process proceeds with significant energy barriers, necessitating the use of high temperatures for incorporation. Using the barriers calculated from DFT, we parameterize a kinetic Monte Carlo model that predicts the incorporation statistics of boron as a function of the initial depassivation geometry, dose, and anneal temperature. Our results suggest that the dimer nature of diborane inherently limits its doping density as an acceptor precursor and furthermore that heating the boron dimers to split before exposure to silicon can lead to poor selectivity on hydrogen and halogen resists. This suggests that, while diborane works as an atomic precision acceptor precursor, other non-dimerized acceptor precursors may lead to higher incorporation rates at lower temperatures.
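The kinetic Monte Carlo model described above can be sketched minimally as a rejection-free (Gillespie-style) event loop driven by Arrhenius rates; the barrier values and attempt-frequency prefactor below are illustrative placeholders, not the paper's DFT-derived numbers.

```python
import math
import random

KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius(barrier_ev, temp_k, prefactor_hz=1e13):
    """Event rate from an energy barrier via the Arrhenius law."""
    return prefactor_hz * math.exp(-barrier_ev / (KB_EV * temp_k))

def kmc_step(barriers_ev, temp_k, rng=random):
    """One Gillespie-style KMC step: pick an event with probability
    proportional to its rate, and draw the elapsed waiting time."""
    rates = [arrhenius(b, temp_k) for b in barriers_ev]
    total = sum(rates)
    r = rng.random() * total
    acc, chosen = 0.0, len(rates) - 1
    for i, rate in enumerate(rates):
        acc += rate
        if r <= acc:
            chosen = i
            break
    dt = -math.log(1.0 - rng.random()) / total  # exponential waiting time
    return chosen, dt

# Placeholder barriers (eV) for competing surface reaction channels.
event, dt = kmc_step([0.9, 1.1, 1.4], temp_k=600.0)
```

Heating increases every rate, which is the mechanism behind the anneal-temperature dependence the model predicts.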


SST-GPU: A Scalable SST GPU Component for Performance Modeling and Profiling

Hughes, Clayton H.; Hammond, Simon D.; Zhang, Mengchi; Liu, Yechen; Rogers, Tim; Hoekstra, Robert J.

Programmable accelerators have become commonplace in modern computing systems. Advances in programming models and the availability of unprecedented amounts of data have created a space for massively parallel accelerators capable of maintaining context for thousands of concurrent threads resident on-chip. These threads are grouped and interleaved on a cycle-by-cycle basis among several massively parallel computing cores. One path for the design of future supercomputers relies on an ability to model the performance of these massively parallel cores at scale. The SST framework has been proven to scale up to run simulations containing tens of thousands of nodes. A previous report described the initial integration of the open-source, execution-driven GPU simulator, GPGPU-Sim, into the SST framework. This report discusses the results of the integration and how to use the new GPU component in SST. It also provides examples of what it can be used to analyze and a correlation study showing how closely the execution matches that of an NVIDIA V100 GPU when running kernels and mini-apps.


Negative Perceptions About the Applicability of Source-to-Source Compilers in HPC: A Literature Review

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Milewicz, Reed M.; Pirkelbauer, Peter; Soundararajan, Prema; Ahmed, Hadia; Skjellum, Tony

A source-to-source (S2S) compiler is a type of translator that accepts the source code of a program written in a programming language as its input and produces an equivalent source code in the same or a different programming language. S2S techniques are commonly used to enable fluent translation between high-level programming languages, to perform large-scale refactoring operations, and to facilitate instrumentation for dynamic analysis. Negative perceptions about S2S's applicability in High Performance Computing (HPC) are studied and evaluated here. This is the first study that brings to light reasons why scientists do not use source-to-source techniques for HPC. The primary audience for this paper is researchers and developers considering S2S technology in their HPC application work.


Solving Stochastic Inverse Problems for Property–Structure Linkages Using Data-Consistent Inversion and Machine Learning

JOM

Laros, James H.; Wildey, Timothy M.

Determining process–structure–property linkages is one of the key objectives in material science, and uncertainty quantification plays a critical role in understanding both process–structure and structure–property linkages. In this work, we seek to learn a distribution of microstructure parameters that are consistent in the sense that the forward propagation of this distribution through a crystal plasticity finite element model matches a target distribution on materials properties. This stochastic inversion formulation infers a distribution of acceptable/consistent microstructures, as opposed to a deterministic solution, which expands the range of feasible designs in a probabilistic manner. To solve this stochastic inverse problem, we employ a recently developed uncertainty quantification framework based on push-forward probability measures, which combines techniques from measure theory and Bayes’ rule to define a unique and numerically stable solution. This approach requires making an initial prediction using an initial guess for the distribution on model inputs and solving a stochastic forward problem. To reduce the computational burden in solving both stochastic forward and stochastic inverse problems, we combine this approach with a machine learning Bayesian regression model based on Gaussian processes and demonstrate the proposed methodology on two representative case studies in structure–property linkages.
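The data-consistent update based on push-forward measures can be sketched as rejection sampling on the ratio of the observed density to the push-forward density. The toy problem below (1-D uniform prior, identity quantity-of-interest map, indicator-style target density) is an illustrative stand-in for the crystal plasticity setting, not the paper's actual model.

```python
import random

def dci_rejection(prior_samples, qoi, obs_pdf, pf_pdf, rng=random):
    """Data-consistent rejection sampling: keep prior sample s with
    probability proportional to obs_pdf(Q(s)) / pf_pdf(Q(s)), where
    pf_pdf is the push-forward density of the prior through the map Q."""
    ratios = [obs_pdf(qoi(s)) / pf_pdf(qoi(s)) for s in prior_samples]
    m = max(ratios)
    return [s for s, r in zip(prior_samples, ratios) if rng.random() * m < r]

# Toy problem: uniform prior on [0, 2], Q(s) = s, target uniform on [0, 1].
rng = random.Random(7)
prior = [rng.uniform(0.0, 2.0) for _ in range(2000)]
posterior = dci_rejection(prior,
                          qoi=lambda s: s,
                          obs_pdf=lambda q: 1.0 if q <= 1.0 else 0.0,
                          pf_pdf=lambda q: 0.5,
                          rng=rng)
```

Pushing the accepted samples back through Q reproduces the target distribution, which is the consistency condition the stochastic inverse problem asks for.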


Asymptotically compatible reproducing kernel collocation and meshfree integration for nonlocal diffusion

SIAM Journal on Numerical Analysis

Leng, Yu; Tian, Xiaochuan; Trask, Nathaniel A.; Foster, John T.

Reproducing kernel (RK) approximations are meshfree methods that construct shape functions from sets of scattered data. We present an asymptotically compatible (AC) RK collocation method for nonlocal diffusion models with Dirichlet boundary conditions. The numerical scheme is shown to be convergent to both nonlocal diffusion and its corresponding local limit as nonlocal interaction vanishes. The analysis is carried out on a special family of rectilinear Cartesian grids for a linear RK method with designed kernel support. The key idea for the stability of the RK collocation scheme is to compare the collocation scheme with the standard Galerkin scheme, which is stable. In addition, assembling the stiffness matrix of the nonlocal problem requires costly computational resources because high-order Gaussian quadrature is necessary to evaluate the integral. We thus provide a remedy to the problem by introducing a quasi-discrete nonlocal diffusion operator for which no numerical quadrature is further needed after applying the RK collocation scheme. The quasi-discrete nonlocal diffusion operator combined with RK collocation is shown to be convergent to the correct local diffusion problem by taking the limits of nonlocal interaction and spatial resolution simultaneously. The theoretical results are then validated with numerical experiments. We additionally illustrate a connection between the proposed technique and an existing optimization-based approach using generalized moving least squares.
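For orientation, here is a naive Riemann-sum discretization of the 1-D nonlocal diffusion operator — the kind of direct quadrature whose cost motivates the quasi-discrete operator above. The kernel choice and grid are arbitrary illustrations, not the paper's scheme.

```python
def nonlocal_diffusion(u, dx, delta, kernel):
    """Apply L_delta u(x_i) = integral over |y-x| < delta of
    (u(y) - u(x)) * gamma(|y-x|) dy on a uniform 1-D grid by a
    Riemann sum (interior nodes only; boundary needs special care)."""
    n, r = len(u), int(round(delta / dx))
    out = [0.0] * n
    for i in range(r, n - r):
        out[i] = sum((u[j] - u[i]) * kernel(abs(j - i) * dx) * dx
                     for j in range(i - r, i + r + 1) if j != i)
    return out

# A symmetric kernel annihilates linear functions: zero nonlocal "flux",
# mirroring the local Laplacian, which also vanishes on linear data.
grid = [0.1 * k for k in range(50)]
lin = nonlocal_diffusion(grid, dx=0.1, delta=0.3, kernel=lambda d: 1.0 / d)
```

In 2-D or 3-D with a high-order quadrature per neighborhood, this double loop is exactly the expensive assembly the quasi-discrete operator avoids.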


Hierarchical Parallelism for Transient Solid Mechanics Simulations

World Congress in Computational Mechanics and ECCOMAS Congress

Littlewood, David J.; Jones, Reese E.; Laros, James H.; Plews, Julia A.; Hetmaniuk, Ulrich L.; Lifflander, Jonathan

Software development for high-performance scientific computing continues to evolve in response to increased parallelism and the advent of on-node accelerators, in particular GPUs. While these hardware advancements have the potential to significantly reduce turnaround times, they also present implementation and design challenges for engineering codes. We investigate the use of two strategies to mitigate these challenges: the Kokkos library for performance portability across disparate architectures, and the DARMA/vt library for asynchronous many-task scheduling. We investigate the application of Kokkos within the NimbleSM finite element code and the LAMÉ constitutive model library. We explore the performance of DARMA/vt applied to NimbleSM contact mechanics algorithms. Software engineering strategies are discussed, followed by performance analyses of relevant solid mechanics simulations which demonstrate the promise of Kokkos and DARMA/vt for accelerated engineering simulators.


2D Microstructure Reconstruction for SEM via Non-local Patch-Based Image Inpainting

Minerals, Metals and Materials Series

Laros, James H.; Tran, Hoang

Microstructure reconstruction is a long-standing problem in experimental and computational materials science, and numerous attempts have been made to solve it. However, the majority of approaches treat microstructure as discrete phases, which reduces the quality of the resulting microstructures and limits their use to computational, rather than experimental, levels of fidelity. In this work, we applied our previously proposed approach [41] to generate synthetic microstructure images at the experimental level of fidelity for the UltraHigh Carbon Steel DataBase (UHCSDB) [13].


Randomized sketching algorithms for low-memory dynamic optimization

SIAM Journal on Optimization

Kouri, Drew P.; Muthukumar, Ramchandran; Udell, Madeleine

This paper develops a novel limited-memory method to solve dynamic optimization problems. The memory requirements for such problems often present a major obstacle, particularly for problems with PDE constraints such as optimal flow control, full waveform inversion, and optical tomography. In these problems, PDE constraints uniquely determine the state of a physical system for a given control; the goal is to find the value of the control that minimizes an objective. While the control is often low dimensional, the state is typically more expensive to store. This paper suggests using randomized matrix approximation to compress the state as it is generated and shows how to use the compressed state to reliably solve the original dynamic optimization problem. Concretely, the compressed state is used to compute approximate gradients and to apply the Hessian to vectors. The approximation error in these quantities is controlled by the target rank of the sketch. This approximate first- and second-order information can readily be used in any optimization algorithm. As an example, we develop a sketched trust-region method that adaptively chooses the target rank using a posteriori error information and provably converges to a stationary point of the original problem. Numerical experiments with the sketched trust-region method show promising performance on challenging problems such as the optimal control of an advection-reaction-diffusion equation and the optimal control of fluid flow past a cylinder.
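The compression idea can be sketched as a one-pass randomized low-rank approximation of the state trajectory, in the spirit of Halko/Martinsson/Tropp-style sketching; the dimensions, target rank, and test matrices below are arbitrary illustrative choices, not the paper's algorithm verbatim.

```python
import numpy as np

def sketch_trajectory(snapshots, rank, seed=0):
    """One-pass randomized sketch of a state trajectory X (columns x_t).
    Only Y = X @ Omega and W = Psi @ X are stored -- never X itself --
    and X is approximated afterwards as Q @ pinv(Psi @ Q) @ W."""
    rng = np.random.default_rng(seed)
    n, T = len(snapshots[0]), len(snapshots)
    omega = rng.standard_normal((T, rank))
    psi = rng.standard_normal((rank + 4, n))
    Y = np.zeros((n, rank))
    W = np.zeros((rank + 4, T))
    for t, x in enumerate(snapshots):      # states arrive one at a time
        Y += np.outer(x, omega[t])         # accumulate Y = X @ Omega
        W[:, t] = psi @ x                  # accumulate W = Psi @ X
    Q, _ = np.linalg.qr(Y)                 # orthonormal range estimate
    return Q @ np.linalg.pinv(psi @ Q) @ W

# A trajectory that is exactly rank 2 is recovered to round-off.
ts = np.linspace(0.0, 6.0, 40)
u1, u2 = np.eye(100)[0], np.eye(100)[1]
snaps = [np.sin(t) * u1 + np.cos(t) * u2 for t in ts]
X = np.column_stack(snaps)
Xhat = sketch_trajectory(snaps, rank=2)
```

The memory footprint is the two sketch matrices, O((n + T) * rank), instead of the full n-by-T trajectory — the trade the paper exploits when computing approximate gradients and Hessian-vector products.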


ADELUS: A Performance-Portable Dense LU Solver for Distributed-Memory Hardware-Accelerated Systems

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Dang, Vinh Q.; Kotulski, J.D.; Rajamanickam, Sivasankaran R.

Solving dense systems of linear equations is essential in applications encountered in physics, mathematics, and engineering. This paper describes our current efforts toward the development of the ADELUS package for current and next generation distributed, accelerator-based, high-performance computing platforms. The package solves dense linear systems using partial pivoting LU factorization on distributed-memory systems with CPUs/GPUs. The matrix is block-mapped onto distributed memory on CPUs/GPUs and is solved as if it were torus-wrapped for an optimal balance of computation and communication. A permutation operation is performed to restore the results so the torus-wrap distribution is transparent to the user. This package targets performance portability by leveraging the abstractions provided in the Kokkos and Kokkos Kernels libraries. Comparisons of the performance gains versus the state-of-the-art SLATE and DPLASMA GESV functionalities on the Summit supercomputer are provided. Preliminary performance results from large-scale electromagnetic simulations using ADELUS are also presented. The solver achieves 7.7 Petaflops on 7600 GPUs of the Sierra supercomputer, translating to 16.9% efficiency.
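The torus-wrap (block-cyclic) distribution mentioned above can be illustrated in a few lines; the process-grid shape and block-grid size are arbitrary examples, not ADELUS internals.

```python
def torus_wrap_owner(i, j, p_rows, p_cols):
    """Torus-wrap (cyclic) mapping: matrix block (i, j) lives on process
    (i mod p_rows, j mod p_cols), so work stays balanced even as LU
    factorization shrinks the active trailing submatrix."""
    return (i % p_rows, j % p_cols)

# Every process owns the same number of blocks of a 12 x 12 block grid.
counts = {}
for i in range(12):
    for j in range(12):
        owner = torus_wrap_owner(i, j, p_rows=4, p_cols=3)
        counts[owner] = counts.get(owner, 0) + 1
```

A contiguous block mapping would instead idle the processes owning already-factored panels, which is why the wrapped layout wins for dense LU.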


An asymptotically compatible approach for Neumann-type boundary condition on nonlocal problems

ESAIM: Mathematical Modelling and Numerical Analysis

You, Huaiqian; Lu, Xin Y.; Trask, Nathaniel A.; Yu, Yue

In this paper we consider 2D nonlocal diffusion models with a finite nonlocal horizon parameter δ characterizing the range of nonlocal interactions, and consider the treatment of Neumann-like boundary conditions that have proven challenging for discretizations of nonlocal models. We propose a new generalization of classical local Neumann conditions by converting the local flux to a correction term in the nonlocal model, which provides an estimate for the nonlocal interactions of each point with points outside the domain. While existing 2D nonlocal flux boundary conditions have been shown to exhibit at most first order convergence to the local counterpart as δ → 0, the proposed Neumann-type boundary formulation recovers the local case as O(δ²) in the L∞(ω) norm, which is optimal considering the O(δ²) convergence of the nonlocal equation to its local limit away from the boundary. We analyze the application of this new boundary treatment to the nonlocal diffusion problem, and present conditions under which the solution of the nonlocal boundary value problem converges to the solution of the corresponding local Neumann problem as the horizon is reduced. To demonstrate the applicability of this nonlocal flux boundary condition to more complicated scenarios, we extend the approach to less regular domains, numerically verifying that we preserve second-order convergence for non-convex domains with corners. Based on the new formulation for the nonlocal boundary condition, we develop an asymptotically compatible meshfree discretization, obtaining a solution to the nonlocal diffusion equation with mixed boundary conditions that converges at rate O(δ²).


A block coordinate descent optimizer for classification problems exploiting convexity

CEUR Workshop Proceedings

Patel, Ravi G.; Trask, Nathaniel A.; Gulian, Mamikon G.; Cyr, Eric C.

Second-order optimizers hold intriguing potential for deep learning, but suffer from increased cost and sensitivity to the non-convexity of the loss surface as compared to gradient-based approaches. We introduce a coordinate descent method to train deep neural networks for classification tasks that exploits global convexity of the cross-entropy loss in the weights of the linear layer. Our hybrid Newton/Gradient Descent (NGD) method is consistent with the interpretation of hidden layers as providing an adaptive basis and the linear layer as providing an optimal fit of the basis to data. By alternating between a second-order method to find globally optimal parameters for the linear layer and gradient descent to train the hidden layers, we ensure an optimal fit of the adaptive basis to data throughout training. The size of the Hessian in the second-order step scales only with the number of weights in the linear layer and not with the depth and width of the hidden layers; furthermore, the approach is applicable to arbitrary hidden layer architectures. Previous work applying this adaptive basis perspective to regression problems demonstrated significant improvements in accuracy at reduced training cost, and this work can be viewed as an extension of this approach to classification problems. We first prove that the resulting Hessian matrix is symmetric and positive semi-definite, and that the Newton step realizes a global minimizer. By studying classification of manufactured two-dimensional point cloud data, we demonstrate both an improvement in validation error and a striking qualitative difference in the basis functions encoded in the hidden layer when trained using NGD. Application to image classification benchmarks for both dense and convolutional architectures reveals improved training accuracy, suggesting gains of second-order methods over gradient descent. A Tensorflow implementation of the algorithm is available at github.com/rgp62/.


DPM: A Novel Training Method for Physics-Informed Neural Networks in Extrapolation

35th AAAI Conference on Artificial Intelligence, AAAI 2021

Kim, Jungeun; Lee, Kookjin L.; Lee, Dongeun; Jhin, Sheo Y.; Park, Noseong

We present a method for learning dynamics of complex physical processes described by time-dependent nonlinear partial differential equations (PDEs). Our particular interest lies in extrapolating solutions in time beyond the range of the temporal domain used in training. Our baseline method is the physics-informed neural network (PINN), because it parameterizes not only the solutions but also the equations that describe the dynamics of physical processes. We demonstrate that PINNs perform poorly on extrapolation tasks in many benchmark problems. To address this, we propose a novel method for better training PINNs and demonstrate that the enhanced PINNs can accurately extrapolate solutions in time. Our method shows up to 72% smaller errors than existing methods in terms of the standard L2-norm metric.
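The structure of a PINN objective — data misfit plus a physics residual at collocation points — can be shown on a toy ODE. Here a two-parameter model u(t) = a·exp(b·t) stands in for a network and central differences stand in for autodiff; both are illustrative simplifications, not the paper's setup.

```python
import math

def pinn_style_loss(params, data, colloc_ts, h=1e-4):
    """Composite PINN-style loss for the toy ODE du/dt = -u."""
    a, b = params
    u = lambda t: a * math.exp(b * t)
    # Data misfit on the (limited) training window.
    data_loss = sum((u(t) - y) ** 2 for t, y in data)
    # Physics residual at collocation points, which may lie beyond the
    # data range -- the handle a PINN has for extrapolating in time.
    resid = lambda t: (u(t + h) - u(t - h)) / (2.0 * h) + u(t)
    return data_loss + sum(resid(t) ** 2 for t in colloc_ts)

data = [(0.0, 1.0), (0.5, math.exp(-0.5))]
good = pinn_style_loss((1.0, -1.0), data, colloc_ts=[0.0, 1.0, 2.0])
bad = pinn_style_loss((1.0, -0.5), data, colloc_ts=[0.0, 1.0, 2.0])
```

The residual term penalizes models that fit the data window yet violate the dynamics farther out, which is exactly where extrapolation quality is decided.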


Dakota-NAERM Integration

Swiler, Laura P.; Newman, Sarah; Staid, Andrea S.; Barrett, Emily

This report presents the results of a collaborative effort under the Verification, Validation, and Uncertainty Quantification (VVUQ) thrust area of the North American Energy Resilience Model (NAERM) program. The goal of the effort described in this report was to integrate the Dakota software with the NAERM software framework to demonstrate sensitivity analysis of a co-simulation for NAERM.


A fast solver for the fractional Helmholtz equation

SIAM Journal on Scientific Computing

Glusa, Christian A.; D'Elia, Marta D.; Antil, Harbir; Weiss, Chester J.; van Bloemen Waanders, Bart G.

The purpose of this paper is to study a Helmholtz problem with a spectral fractional Laplacian, instead of the standard Laplacian. Recently, it has been established that such a fractional Helmholtz problem better captures the underlying behavior in geophysical electromagnetics. We establish the well-posedness and regularity of this problem. We introduce a hybrid spectral-finite element approach to discretize it and show well-posedness of the discrete system. In addition, we derive a priori discretization error estimates. Finally, we introduce an efficient solver that scales as well as the best possible solver for the classical integer-order Helmholtz equation. We conclude with several illustrative examples that confirm our theoretical findings.
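The action of the spectral fractional Laplacian is simple to state in an eigenbasis: each eigencoefficient is scaled by the eigenvalue raised to the power s. A 1-D Dirichlet illustration follows (the paper works in more general geophysical settings; this interval example is an assumption for clarity).

```python
import math

def spectral_fractional_laplacian(sine_coeffs, s, L=math.pi):
    """Apply the spectral fractional Laplacian (-Delta)^s on (0, L) with
    zero Dirichlet data: in the sine eigenbasis sin(k*pi*x/L), with
    eigenvalues lambda_k = (k*pi/L)**2, coefficient k is scaled by
    lambda_k**s."""
    return [c * ((k * math.pi / L) ** 2) ** s
            for k, c in enumerate(sine_coeffs, start=1)]

# s = 1 reproduces the classical Laplacian; s = 1/2 takes sqrt(lambda_k).
classical = spectral_fractional_laplacian([1.0, 1.0], s=1.0)
half = spectral_fractional_laplacian([0.0, 1.0], s=0.5)
```

The fractional power only reweights the spectrum, which is also why solvers for the integer-order problem are the natural performance target.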


Scalable3-BO: Big data meets HPC - A scalable asynchronous parallel high-dimensional Bayesian optimization framework on supercomputers

Proceedings of the ASME Design Engineering Technical Conference

Laros, James H.

Bayesian optimization (BO) is a flexible and powerful framework that is suitable for computationally expensive simulation-based applications and guarantees statistical convergence to the global optimum. While remaining one of the most popular optimization methods, its capability is hindered by the size of data, the dimensionality of the considered problem, and the nature of sequential optimization. These scalability issues are intertwined with each other and must be tackled simultaneously. In this work, we propose the Scalable3-BO framework, which employs a sparse Gaussian process (GP) as the underlying surrogate model to cope with Big Data and is equipped with a random embedding to efficiently optimize high-dimensional problems with low effective dimensionality. The Scalable3-BO framework is further leveraged with an asynchronous parallelization feature, which fully exploits the computational resource on HPC within a computational budget. As a result, the proposed Scalable3-BO framework is scalable in three independent respects: data size, dimensionality, and computational resource on HPC. The goal of this work is to push the frontiers of BO beyond its well-known scalability issues and minimize the wall-clock waiting time for optimizing high-dimensional computationally expensive applications. We demonstrate the capability of Scalable3-BO with 1 million data points, 10,000-dimensional problems, with 20 concurrent workers in an HPC environment.
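The random-embedding trick for low effective dimensionality can be sketched in a REMBO-style form: search in a low-dimensional space, evaluate the expensive objective at a randomly embedded (and box-clipped) high-dimensional point. The Gaussian embedding, box bounds, and toy objective below are illustrative assumptions, not Scalable3-BO's implementation.

```python
import random

def make_random_embedding(high_dim, low_dim, seed=0):
    """Build an embedding y -> clip(A @ y) from R^low_dim into the
    [-1, 1]^high_dim box, with A a fixed Gaussian random matrix."""
    rng = random.Random(seed)
    A = [[rng.gauss(0.0, 1.0) for _ in range(low_dim)]
         for _ in range(high_dim)]
    def embed(y, lo=-1.0, hi=1.0):
        x = (sum(a * yj for a, yj in zip(row, y)) for row in A)
        return [min(max(v, lo), hi) for v in x]  # project into the box
    return embed

embed = make_random_embedding(high_dim=10_000, low_dim=2)
# A 10,000-D objective that really depends on only two coordinates.
f = lambda x: (x[0] - 0.3) ** 2 + (x[1] + 0.2) ** 2
x0 = embed([0.0, 0.0])
```

The BO loop (sparse-GP surrogate, acquisition maximization, asynchronous workers) would then run entirely in the 2-D embedded space, which is what makes 10,000-dimensional problems tractable.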


Risk-averse control of fractional diffusion with uncertain exponent

SIAM Journal on Control and Optimization

Kouri, Drew P.; Antil, Harbir; Pfefferer, Johannes

In this paper, we introduce and analyze a new class of optimal control problems constrained by elliptic equations with uncertain fractional exponents. We utilize risk measures to formulate the resulting optimization problem. We develop a functional analytic framework, study the existence of solution, and rigorously derive the first-order optimality conditions. Additionally, we employ a sample-based approximation for the uncertain exponent and the finite element method to discretize in space. We prove the rate of convergence for the optimal risk neutral controls when using quadrature approximation for the uncertain exponent and conclude with illustrative examples.
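The abstract refers to risk measures generically; the conditional value-at-risk (CVaR) is a standard example and is easy to sketch from samples. Using CVaR here is an assumption for illustration, not a claim about which measure the paper employs.

```python
def cvar(samples, alpha):
    """Conditional value-at-risk at level alpha: the mean of the worst
    (1 - alpha) fraction of sampled outcomes. Minimizing the CVaR of an
    objective yields controls that hedge against tail events, e.g. over
    samples of an uncertain fractional exponent."""
    worst = sorted(samples, reverse=True)
    k = max(1, int(round((1.0 - alpha) * len(worst))))
    return sum(worst[:k]) / k

# Averages the two largest of four sampled objective values.
print(cvar([1.0, 2.0, 3.0, 4.0], alpha=0.5))  # -> 3.5
```

A risk-neutral formulation would instead minimize the plain sample mean; raising alpha interpolates toward worst-case control.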
