In this work, we study the reproducing kernel (RK) collocation method for the peridynamic Navier equation. In the first part, we apply a linear RK approximation to both the displacement and the dilatation, then back-substitute the dilatation and solve the peridynamic Navier equation in a pure displacement form. The RK collocation scheme converges to the nonlocal solution for a fixed nonlocal interaction length and to the local limit as nonlocal interactions vanish. Stability is shown by comparing the collocation scheme with the standard Galerkin scheme using Fourier analysis. In the second part, we apply RK collocation to the quasi-discrete peridynamic Navier equation and show its convergence to the correct local limit when the ratio between the nonlocal length scale and the discretization parameter is fixed. The analysis is carried out on a special family of rectilinear Cartesian grids for the RK collocation method with a designated kernel with finite support. We assume the Lamé parameters satisfy λ ≥ μ to avoid extra assumptions on the nonlocal kernel. Finally, numerical experiments are conducted to validate the theoretical results.
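For orientation, the sketch below uses our own generic notation (horizon δ, radial kernel ω_δ, body force b); the paper's exact scaling constants and kernel weights are omitted. It records the local Navier limit referenced above and one common linearized definition of the nonlocal dilatation used in state-based models of this type.

```latex
\[
\mu\,\Delta u(x) + (\lambda+\mu)\,\nabla\bigl(\nabla\!\cdot u(x)\bigr) + b(x) = 0
\qquad \text{(local Navier limit)},
\]
\[
\theta_{\delta}(x) = \frac{d}{m(\delta)}\int_{B_{\delta}(x)}
\omega_{\delta}(|y-x|)\,(y-x)\cdot\bigl(u(y)-u(x)\bigr)\,\mathrm{d}y,
\qquad
m(\delta) = \int_{B_{\delta}(0)} \omega_{\delta}(|\xi|)\,|\xi|^{2}\,\mathrm{d}\xi .
\]
```

Back-substituting a discrete approximation of θ_δ into the dilatational part of the nonlocal operator is what yields the pure displacement form solved by the collocation scheme.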
Understanding the effects of contaminant plasmas generated within the Z machine at Sandia is critical to understanding current-loss mechanisms. The plasmas are generated at the accelerator electrode surfaces and include species desorbed from the surface and substrate of the walls; these desorbed species can become ionized. The timing and location of contaminant desorption from the wall surface depend nonlinearly on the local surface temperature, so accurate modeling requires wall-heating models to estimate the amount and timing of material desorption. One of these heating mechanisms is Joule heating. We propose several extended semi-analytic magnetic-diffusion heating models for computing surface Joule heating and demonstrate their effects for several representative current histories. We quantitatively assess under what circumstances these extensions to the classical formulas may provide a validatable improvement to the understanding of contaminant desorption timing.
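As a rough illustration of the kind of estimate involved (a textbook scaling, not one of the extended models proposed here), the sketch below applies a classical magnetic-diffusion argument to a synthetic current history: surface field B = μ0 I / (2π r), diffusive skin depth δ(t) ≈ sqrt(t / (μ0 σ)), surface current density J ≈ B / (μ0 δ), and heating rate dT/dt = J² / (σ ρ c_p). All material parameters and the geometry are placeholders.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [H/m]
SIGMA = 3.5e7        # electrical conductivity [S/m] (placeholder, stainless-steel order)
RHO = 7.9e3          # mass density [kg/m^3] (placeholder)
CP = 500.0           # specific heat [J/(kg K)] (placeholder)
RADIUS = 0.05        # effective electrode radius [m] (placeholder)

def surface_joule_heating(t, current):
    """Crude classical estimate of the surface temperature rise.

    Assumes the current flows in a diffusive skin layer of depth
    sqrt(t / (mu0*sigma)) with constant material properties; these are
    exactly the idealizations the extended models in the text relax.
    """
    b_surf = MU0 * current / (2.0 * np.pi * RADIUS)       # surface B field [T]
    skin = np.sqrt(np.maximum(t, 1e-12) / (MU0 * SIGMA))  # diffusion depth [m]
    j_surf = b_surf / (MU0 * skin)                        # surface current density [A/m^2]
    dT_dt = j_surf ** 2 / (SIGMA * RHO * CP)              # heating rate [K/s]
    return np.cumsum(dT_dt * np.gradient(t))              # temperature rise history [K]

# synthetic ~100 ns rise, ~20 MA flat-top current pulse (not a real Z history)
t = np.linspace(0.0, 300e-9, 3000)
current = 20e6 * np.sin(0.5 * np.pi * np.clip(t / 100e-9, 0.0, 1.0)) ** 2
print(f"estimated surface temperature rise: {surface_joule_heating(t, current)[-1]:.0f} K")
```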
PyGSTi is a Python software package for assessing and characterizing the performance of quantum computing processors. It can be used as a standalone application or as a library to perform a wide variety of quantum characterization, verification, and validation (QCVV) protocols on as-built quantum processors. We outline pyGSTi's structure and capabilities using multiple examples. We cover its main characterization protocols with end-to-end implementations, including gate set tomography, randomized benchmarking on one or many qubits, and several specialized techniques. We also discuss and demonstrate how power users can customize pyGSTi and leverage its components to create specialized QCVV protocols and solve user-specific problems.
This work presents the design of nonlinear stabilization techniques for the finite element discretization of the Euler equations in both steady and transient form; implicit time integration is used for the transient form. We develop a differentiable local-bounds-preserving method that combines a Rusanov artificial diffusion operator with a differentiable shock detector. Nonlinear stabilization schemes are usually stiff and highly nonlinear, an issue mitigated by the differentiability properties of the proposed method. Moreover, to further improve nonlinear convergence, we also propose a continuation method for a subset of the stabilization parameters. The resulting method has been successfully applied to steady and transient problems with complex shock patterns, and numerical experiments show that it provides sharp, well-resolved shocks. The importance of differentiability is assessed by comparing the new scheme with its non-differentiable counterpart. Numerical experiments suggest that, for up to moderate nonlinear tolerances, the method exhibits improved robustness and nonlinear convergence for steady problems; for transient problems, we also observe a reduction in computational cost.
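To make the differentiability point concrete, the sketch below shows one generic way (not necessarily the authors' detector) to smooth the non-differentiable ingredients of a shock detector: |x| is replaced by a regularized surrogate and the detector saturates smoothly, so the blended Rusanov-type viscosity has well-defined derivatives for Newton-type solvers. The quantities `jump`, `mean_slope`, and `slope_ref` are placeholders for whatever nodal indicators the discretization provides.

```python
import numpy as np

EPS = 1.0e-8  # regularization parameter (assumed; trades smoothness vs. sharpness)

def smooth_abs(x, eps=EPS):
    """Differentiable surrogate for |x|."""
    return np.sqrt(x * x + eps * eps)

def shock_detector(jump, mean_slope, slope_ref=1.0, eps=EPS):
    """Smooth detector in [0, 1): close to 1 where the inter-element jump
    dominates the mean slope, close to 0 in smooth regions."""
    ratio = smooth_abs(jump, eps) / (smooth_abs(mean_slope, eps) + slope_ref)
    return np.tanh(ratio) ** 2

def stabilized_viscosity(lambda_max, h, jump, mean_slope):
    """First-order Rusanov-type viscosity nu = 0.5*lambda_max*h scaled by the
    smooth detector, so the added diffusion vanishes smoothly away from shocks."""
    return shock_detector(jump, mean_slope) * 0.5 * lambda_max * h
```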
With many recent advances in interconnect technologies and memory interfaces, disaggregated memory systems are approaching industrial adoption. For instance, the recent Gen-Z consortium focuses on a new memory-semantic protocol that enables fabric-attached memories (FAM), where memory and other compute units can be attached directly to fabric interconnects. Decoupling memory from compute units becomes a feasible option as data transfer rates increase with the emergence of novel interconnect technologies, such as silicon photonic interconnects. Disaggregated memories not only enable more efficient use of capacity (minimizing under-utilization) but also allow easy integration of evolving technologies. Additionally, they simplify the programming model while allowing efficient sharing of data. However, the latency of accessing data in FAMs depends on the latency imposed by the fabric interfaces. To reduce memory access latency and improve the performance of FAM systems, in this paper we explore techniques to prefetch data from FAMs to the local memory present in the node (PreFAM). Because FAM access latency is high, prefetching a cache block (64 bytes) from FAM can be inefficient: a demand request to the same FAM location is likely to arrive before the prefetch completes. Hence, we explore predicting and prefetching FAM blocks at a distance, i.e., prefetching blocks that will be accessed in the future but not immediately. We show that, with prefetching, the performance of FAM architectures increases by 38.84% while memory access latency improves by 39.6%, with only a 17.65% increase in the number of FAM accesses, on average. Further, by prefetching at a distance we show a performance improvement of 72.23%.
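The sketch below is a toy trace-driven model (our own simplification, not the paper's simulator) of the idea of prefetching at a distance: a simple stride predictor issues prefetches D blocks ahead of the current access, so that by the time the demand request arrives, the prefetch has already covered the FAM round-trip latency. The latency value and the streaming trace are placeholders.

```python
BLOCK = 64            # cache block size in bytes
FAM_LATENCY = 400     # assumed FAM round-trip latency, measured in accesses (placeholder)

def simulate(trace, distance):
    """Fraction of demand accesses that hit a prefetch issued early enough."""
    inflight = {}                 # block id -> access index at which the prefetch completes
    ready = set()                 # blocks already resident in local memory
    last_block, stride, hits = None, None, 0
    for t, addr in enumerate(trace):
        blk = addr // BLOCK
        for b in [b for b, tc in inflight.items() if tc <= t]:   # retire finished prefetches
            ready.add(b)
            del inflight[b]
        if blk in ready:
            hits += 1
        if last_block is not None:                               # simple stride detection
            stride = blk - last_block
        last_block = blk
        if stride:
            target = blk + distance * stride                     # prefetch "at a distance"
            if target not in ready and target not in inflight:
                inflight[target] = t + FAM_LATENCY
    return hits / len(trace)

# streaming trace: only a prefetch distance comparable to the FAM latency hides it
trace = [i * BLOCK for i in range(100_000)]
print(simulate(trace, 1), simulate(trace, 512))
```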
We employ a fully charge-self-consistent quantum transport formalism, together with a heuristic elastic scattering model, to study the local density of states (LDOS) and the conductive properties of Si:P δ-layer wires at the cryogenic temperature of 4 K. The simulations allow us to explain the origin of shallow conducting sub-bands recently observed in high-resolution angle-resolved photoemission spectroscopy experiments. Our LDOS analysis shows that the free electrons are spatially separated into layers with different average kinetic energies, which, along with elastic scattering, must be accounted for to reproduce the sheet resistance values obtained over a wide range of δ-layer donor densities.
We present a Physics-Informed Graph Neural Network (pigNN) methodology for rapid and automated compact model development. It brings together the inherent strengths of data-driven machine learning, the high-fidelity physics in TCAD simulations, and the knowledge contained in existing compact models. In this work, we focus on developing a neural network (NN) based compact model for a non-ideal PN diode that represents one nonlinear edge in a pigNN graph. This model accurately captures the smooth transition between the exponential and quasi-linear response regions. By learning a voltage-dependent non-ideality factor with the NN and employing an inverse response function in the NN loss function, the model also accurately captures the voltage-dependent recombination effect. This NN compact model serves as a basis model for a PN diode, whether it is a single device or an isolated diode identified within a complex device by topological data analysis (TDA) methods. The pigNN methodology is also applicable to deriving reduced-order models in other engineering areas.
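The sketch below illustrates the core idea of a voltage-dependent ideality factor learned by a small NN inside the diode equation I(V) = I_s (exp(V / (n(V) V_T)) - 1). It is a minimal assumption-laden example, not the paper's model: the parameter values, the (1, 3) range for n(V), and the log-domain loss (our stand-in for the inverse response function) are our choices, and series resistance, which produces the quasi-linear region, is omitted for brevity.

```python
import torch
import torch.nn as nn

VT = 0.02585   # thermal voltage near 300 K [V]
IS = 1e-12     # assumed saturation current [A] (placeholder)

class DiodeNN(nn.Module):
    """Diode model I(V) = IS*(exp(V/(n(V)*VT)) - 1) with an NN-learned,
    voltage-dependent ideality factor n(V); series resistance omitted."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def ideality(self, v):
        # keep n(V) in a plausible range, here (1, 3)
        return 1.0 + 2.0 * torch.sigmoid(self.net(v))

    def forward(self, v):
        return IS * (torch.exp(v / (self.ideality(v) * VT)) - 1.0)

# synthetic "measured" data: ideality drifts from ~2 (recombination) toward ~1
v = torch.linspace(0.05, 0.7, 100).unsqueeze(1)
n_true = 1.0 + 1.0 / (1.0 + torch.exp((v - 0.3) / 0.05))
i_meas = IS * (torch.exp(v / (n_true * VT)) - 1.0)

model = DiodeNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    # inverse-response-style loss: compare in the log domain so the
    # exponential and low-bias regions are weighted comparably
    loss = torch.mean((torch.log(model(v) + IS) - torch.log(i_meas + IS)) ** 2)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.3e}")
```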
One big challenge for the emerging atomic precision advanced manufacturing (APAM) technology in microelectronics applications is to realize APAM devices that operate at room temperature (RT). We demonstrate that a semiclassical technology computer-aided design (TCAD) device simulation tool can be employed to understand current leakage and improve APAM device design for RT operation. To establish the applicability of semiclassical simulation, we first show that a semiclassical impurity scattering model with Fermi-Dirac statistics explains the very low mobility in APAM devices quite well; we also show that semiclassical TCAD reproduces measured sheet resistances when proper mobility values are used. We then apply semiclassical TCAD to simulate current leakage in realistic APAM wires. With insights from modeling, we were able to improve the device design, fabricate Hall bars, and demonstrate RT operation for the first time.
Canonical Polyadic tensor decomposition using alternating Poisson regression (CP-APR) is an effective analysis tool for large sparse count datasets. One variant, which uses projected damped Newton optimization for the row subproblems (PDNR), offers quadratic convergence and is amenable to parallelization. Despite its potential effectiveness, PDNR performance on modern high performance computing (HPC) systems is not well understood. To remedy this, we have developed a parallel implementation of PDNR using Kokkos, a performance-portable parallel programming framework that supports efficient execution of a single code base on multiple HPC systems. We demonstrate that the performance of parallel PDNR can be poor if the load imbalance associated with the irregular distribution of nonzero entries in the tensor data is not addressed. Preliminary results using tensors from the FROSTT dataset indicate that using multiple kernels to address this imbalance when solving the PDNR row subproblems in parallel can improve performance, with up to 80% speedup on CPUs and 10-fold speedup on NVIDIA GPUs.
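For context, the row subproblems referred to above arise from the standard CP-APR objective, written here in our own notation: the Kruskal model M is fit to the count tensor X by minimizing the Poisson negative log-likelihood, which separates into independent nonnegative row problems once all but one factor matrix is fixed.

```latex
\[
\min_{\mathbf{A}^{(1)},\dots,\mathbf{A}^{(N)} \ge 0}\;
\sum_{i}\bigl(m_{i}-x_{i}\log m_{i}\bigr),
\qquad
\mathbf{M}=[\![\mathbf{A}^{(1)},\dots,\mathbf{A}^{(N)}]\!],
\]
\[
\text{row subproblem:}\quad
\min_{\mathbf{b}\ge 0}\;
\sum_{j}\Bigl(\mathbf{b}^{\top}\boldsymbol{\pi}_{j}
-x_{j}\log\bigl(\mathbf{b}^{\top}\boldsymbol{\pi}_{j}\bigr)\Bigr),
\]
```

where x_i and m_i are corresponding entries of the data tensor and the model, b is one row of the factor matrix being updated, and π_j collects the matching rows of the Khatri-Rao product of the fixed factors; PDNR applies a projected damped Newton step to each such row problem.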
Tensor decomposition models play an increasingly important role in modern data science applications. One problem of particular interest is fitting a low-rank Canonical Polyadic (CP) tensor decomposition model when the tensor has sparse structure and its elements are nonnegative count data. SparTen is a high-performance C++ library that computes such low-rank decompositions using different solvers, a first-order quasi-Newton method or a second-order damped Newton method, together with an appropriate choice of runtime parameters. Since SparTen's default parameters were tuned to experimental results from prior published work on a single real-world dataset, obtained with MATLAB implementations of these methods, it remains unclear whether the defaults are appropriate for general tensor data. Furthermore, it is unknown how sensitive algorithm convergence is to changes in the input parameter values. This report addresses these unresolved issues through large-scale experimentation on three benchmark tensor datasets. Experiments were conducted on several different CPU architectures and replicated with many initial states to establish generalized profiles of algorithm convergence behavior.
Intuition tells us that a rolling or spinning sphere will eventually stop due to friction and other dissipative interactions. The same resistance to rolling and to spinning (twisting) torque that stops a sphere also changes the microstructure of a granular packing of frictional spheres by increasing the number of constraints on the degrees of freedom of motion. We perform discrete element modeling simulations to construct sphere packings that implement a range of frictional constraints under a pressure-controlled protocol. Mechanically stable packings are achievable at volume fractions and average coordination numbers as low as 0.53 and 2.5, respectively, when the particles experience high resistance to sliding, rolling, and twisting. Only when the particle model includes rolling and twisting friction were experimental volume fractions reproduced.
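As an illustration of what "high resistance to sliding, rolling, and twisting" means at the contact level, the sketch below shows one common class of contact laws (our own simplification, not necessarily the exact model used here): spring-accumulated tangential, rolling, and twisting reactions, each capped by a Coulomb-type limit proportional to the normal force.

```python
import numpy as np

def coulomb_cap(vector_value, limit):
    """Scale a spring-accumulated reaction back onto its Coulomb limit."""
    mag = np.linalg.norm(vector_value)
    return vector_value if mag <= limit else vector_value * (limit / mag)

def contact_reactions(fn, ft_spring, mr_spring, mt_spring,
                      mu_s, mu_r, mu_t, r_eff):
    """Cap the tangential force and the rolling/twisting torques.

    fn        : normal force magnitude
    *_spring  : accumulated spring reactions (3-vectors)
    mu_s/r/t  : sliding, rolling, twisting friction coefficients
    r_eff     : effective contact radius (sets the torque scale)
    """
    ft = coulomb_cap(np.asarray(ft_spring), mu_s * fn)           # sliding friction
    mr = coulomb_cap(np.asarray(mr_spring), mu_r * r_eff * fn)   # rolling resistance
    mt = coulomb_cap(np.asarray(mt_spring), mu_t * r_eff * fn)   # twisting resistance
    return ft, mr, mt
```

Each active constraint of this kind removes relative degrees of freedom at a contact, which is why stable packings persist at the low coordination numbers reported above.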
In this paper, we consider the optimal control of semilinear elliptic PDEs with random inputs. These problems are often nonconvex, infinite-dimensional stochastic optimization problems for which we employ risk measures to quantify the implicit uncertainty in the objective function. In contrast to previous works in uncertainty quantification and stochastic optimization, we provide a rigorous mathematical analysis demonstrating higher solution regularity (in stochastic state space), continuity and differentiability of the control-to-state map, and existence, regularity and continuity properties of the control-to-adjoint map. Our proofs make use of existing techniques from PDE-constrained optimization as well as concepts from the theory of measurable multifunctions. We illustrate our theoretical results with two numerical examples motivated by the optimal doping of semiconductor devices.
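In illustrative notation of our own, the class of problems studied has the form below: a risk measure R, for example the conditional value-at-risk, is applied to a random objective constrained by a semilinear elliptic PDE with random inputs ξ (here κ is a random coefficient, c a semilinear term, and B the control operator).

```latex
\[
\min_{z \in Z_{\mathrm{ad}}}\;
\mathcal{R}\bigl[\,J\bigl(u(z;\xi)\bigr)\bigr] + \frac{\alpha}{2}\,\|z\|_{Z}^{2}
\quad\text{s.t.}\quad
-\nabla\!\cdot\bigl(\kappa(\xi)\,\nabla u\bigr) + c(u;\xi) = f(\xi) + Bz
\ \text{in } D,
\qquad u = 0 \ \text{on } \partial D,
\]
\[
\text{e.g.}\quad
\mathrm{CVaR}_{\beta}[X] \;=\; \inf_{t\in\mathbb{R}}
\Bigl\{\, t + \tfrac{1}{1-\beta}\,\mathbb{E}\bigl[(X-t)_{+}\bigr] \Bigr\}.
\]
```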
GPUs are now a fundamental accelerator for many high-performance computing applications, and they are viewed by many as a technology facilitator for the surge in fields such as machine learning and convolutional neural networks. To deliver the best performance on a GPU, we need monitoring tools that help ensure the code is optimized to extract the most performance and efficiency from the hardware. Since NVIDIA GPUs are currently the most commonly deployed in HPC applications and systems, NVIDIA tools are the natural choice for performance monitoring. The Lightweight Distributed Metric Service (LDMS) at Sandia is an infrastructure widely adopted for large-scale system and application monitoring, and Sandia has already developed CPU application monitoring capability within LDMS; we therefore chose to develop a GPU monitoring capability within the same framework. In this report, we discuss the current limitations of the NVIDIA monitoring tools and how we overcame them, and we present an overview of the tool we built to monitor GPU performance in LDMS, its capabilities, and our current validation results. Most performance-counter values collected through LDMS by our tool match those reported by the vendor tools. Furthermore, our tool provides these statistics as a time series over the entire application run, not just aggregate statistics at the end, which allows the user to follow application behavior over its lifetime.
Analog hardware accelerators, which perform computation within a dense memory array, have the potential to overcome the major bottlenecks faced by digital hardware for data-heavy workloads such as deep learning. Exploiting the intrinsic computational advantages of memory arrays, however, has proven challenging, principally because of the overhead imposed by the peripheral circuitry and the non-ideal properties of the memory devices that play the role of the synapse. We review existing implementations of these accelerators for deep supervised learning, organizing our discussion around the different levels of the accelerator design hierarchy, with an emphasis on circuits and architecture. We explore and consolidate the various approaches that have been proposed to address the critical challenges faced by analog accelerators, for both neural network inference and training, and highlight the key design trade-offs underlying these techniques.
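The intrinsic advantage referred to above is that a resistive memory array computes a matrix-vector product in one step via Ohm's and Kirchhoff's laws: column currents are sums of conductance-voltage products, with the matrix stored as device conductances. The sketch below is a minimal numerical illustration of our own, with made-up non-ideality parameters, of how programming noise and a finite conductance window distort that product.

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed conductance window [S]
WRITE_NOISE = 0.05          # assumed relative programming noise

def program_crossbar(weights, rng):
    """Map a weight matrix onto noisy differential conductance pairs."""
    scale = (G_MAX - G_MIN) / np.max(np.abs(weights))
    g_pos = G_MIN + scale * np.clip(weights, 0.0, None)
    g_neg = G_MIN + scale * np.clip(-weights, 0.0, None)
    noise = lambda g: g * (1.0 + WRITE_NOISE * rng.standard_normal(g.shape))
    return noise(g_pos), noise(g_neg), scale

def crossbar_matvec(g_pos, g_neg, scale, v):
    """Analog MVM: each column current is the sum of conductance * voltage."""
    return (g_pos - g_neg).T @ v / scale

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))
x = rng.standard_normal(64)
gp, gn, s = program_crossbar(W, rng)
err = np.linalg.norm(crossbar_matvec(gp, gn, s, x) - W.T @ x) / np.linalg.norm(W.T @ x)
print("relative error from device non-idealities:", err)
```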
We provide a comprehensive overview of mixed-integer programming formulations for the unit commitment (UC) problem. UC formulations have been an especially active area of research over the past 12 years due to their practical importance in power grid operations, and this paper serves as a capstone for this line of work. We additionally provide publicly available reference implementations of all formulations examined. We computationally test existing and novel UC formulations on a suite of instances drawn from both academic and real-world data sources. Driven by our computational experience from this and previous work, we contribute some additional formulations for both generator production upper bounds and piecewise linear production costs. By composing new UC formulations using existing components found in the literature and new components introduced in this paper, we demonstrate that performance can be significantly improved—and in the process, we identify a new state-of-the-art UC formulation.
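To fix notation for the formulation components mentioned above, a typical "three-binary" UC core (written here in generic notation; individual formulations differ precisely in how they state and tighten these constraints) uses commitment, startup, and shutdown indicators u, v, w linked by a state equation, with production bounded by the commitment status and startups restricted by minimum up-time.

```latex
\[
u_{g,t}-u_{g,t-1}=v_{g,t}-w_{g,t},
\qquad
\underline{P}_{g}\,u_{g,t}\;\le\;p_{g,t}\;\le\;\overline{P}_{g}\,u_{g,t},
\qquad
\sum_{i=t-UT_{g}+1}^{t} v_{g,i}\;\le\;u_{g,t},
\]
```

with g indexing generators and t time periods; the generator production upper bounds and piecewise linear cost representations studied in the paper refine or replace the middle family of constraints.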