This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016, on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection of two important Sandia communities: the complex systems (CS) modeling community and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim but will nonetheless be used, sometimes in real time, by decision makers?
We are interested in simulating a variety of problems in three dimensions (3D) featuring large electric currents. While 2D simulations have been quite informative, cylindrical symmetry may interfere with a problem’s relevant physics. Specifically, all objects in the domain behave as if they are extruded 360°, turning particles into hoops. When dealing with electrical current, this can have serious ramifications for the current pathways. In 3D (r, φ, z), currents can adjust their pathways anywhere along those 360 degrees given the right conditions; in 2D (r, z), however, those pathways can be completely choked off because an insulating hoop, rather than a particle, is present.
A method for providing non-diffuse transport of material quantities in arbitrary Lagrangian-Eulerian (ALE) dynamic solid mechanics computations is presented. ALE computations are highly desirable for simulating dynamic problems that incorporate multiple materials and large deformations. Despite the advantages of using ALE for such problems, the method is associated with diffusion of material quantities due to the advection transport step of the computational cycle. This drawback poses great difficulty for material-failure applications, in which discrete features are important but are smeared out by the diffusive advection operation. The focus of this work is an ALE method that incorporates transport of variables on discrete, massless points that move with the velocity field, referred to as Lagrangian material tracers (LMT), and consequently prevents diffusion of certain material quantities of interest. A detailed description of the algorithm is provided along with a discussion of its computational aspects. Simulation results include a simple proof of concept, verification using a manufactured solution, and fragmentation of a uniformly loaded thin ring that clearly demonstrates the improvement offered by the ALE LMT method.
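The core of the LMT idea, advecting massless tracer points with the local velocity field so that history variables stay attached to the material rather than being remapped (and thus diffused) with the mesh, can be illustrated with a minimal sketch. The data layout, the stand-in velocity function, and the forward-Euler update below are illustrative assumptions, not the report's implementation.

    #include <vector>

    // Minimal sketch of a Lagrangian material tracer (LMT): a massless point
    // that carries history variables and moves with the material, so those
    // variables never pass through the diffusive mesh advection/remap step.
    struct Tracer {
        double x[3];       // current position
        double damage;     // example history variable carried without diffusion
    };

    // Stand-in for interpolating the mesh velocity field to a point.
    void velocityAt(const double /*x*/[3], double v[3]) {
        v[0] = 1.0; v[1] = 0.0; v[2] = 0.0;   // placeholder uniform field
    }

    // One Lagrangian step (forward Euler purely for illustration).
    void moveTracers(std::vector<Tracer>& tracers, double dt) {
        for (Tracer& t : tracers) {
            double v[3];
            velocityAt(t.x, v);
            for (int d = 0; d < 3; ++d)
                t.x[d] += dt * v[d];
        }
    }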
All many-core systems require fine-grained shared-memory parallelism; however, the most efficient way to extract such parallelism is far from trivial. Fine-grained parallel algorithms face various performance trade-offs related to tasking, accesses to global data structures, and use of shared cache. While programming models provide high-level abstractions, such as data and task parallelism, algorithmic choices still remain open on how to best implement irregular algorithms, such as sparse factorizations, while taking into account the trade-offs mentioned above. In this paper, we compare these performance trade-offs for task and data parallelism on different hardware architectures such as Intel Sandy Bridge, Intel Xeon Phi, and IBM Power8. We do this by comparing the scaling of a new task-parallel incomplete sparse Cholesky factorization called Tacho and a new data-parallel incomplete sparse LU factorization called Basker. Both solvers utilize the Kokkos programming model and were developed within the ShyLU package of Trilinos. Using these two codes we demonstrate how high-level programming changes affect performance and overhead costs on multiple multi/many-core systems. We find that Kokkos is able to provide comparable performance with both parallel-for and task/futures on traditional x86 multicores. However, the choice of which high-level abstraction to use on many-core systems depends on both the architectures and the input matrices.
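As a concrete illustration of the data-parallel abstraction discussed above, the following is a minimal Kokkos parallel_for sketch. It is a generic example of the programming model, not code from Tacho or Basker, and the kernel shown (initializing an array) is purely illustrative.

    #include <Kokkos_Core.hpp>

    int main(int argc, char* argv[]) {
        Kokkos::initialize(argc, argv);
        {
            const int n = 1000;
            Kokkos::View<double*> x("x", n);   // device-resident array

            // Data-parallel abstraction: one implicit unit of work per index,
            // mapped by Kokkos onto the threads/vector lanes of the chosen
            // backend (OpenMP, Xeon Phi, GPU, ...). A task-parallel code such
            // as Tacho instead expresses dependencies between coarser tasks.
            Kokkos::parallel_for("init_and_scale", n, KOKKOS_LAMBDA(const int i) {
                x(i) = 2.0 * i;
            });
            Kokkos::fence();
        }
        Kokkos::finalize();
        return 0;
    }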
Remote Direct Memory Access (RDMA) is expected to be an integral communication mechanism for future exascale systems, enabling asynchronous data transfers so that applications may fully utilize all CPU resources while simultaneously sharing data among remote nodes. We examine network-induced memory contention (NiMC), the interaction between RDMA and the memory subsystem when applications and out-of-band services compete for memory resources, and NiMC's resulting impact on application-level performance. For a range of hardware technologies and HPC workloads, we quantify NiMC and show that its impact grows with scale, resulting in up to 3X performance degradation at scales as small as 8K processes, even in applications that have previously been shown to be performance resilient in the presence of noise. We also evaluate three potential techniques to reduce NiMC's performance impact: hardware offloading, core reservation, and software-based network throttling. While all three of these solutions show promise, we provide guidelines that help select the best solution for a given environment.
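Of the three mitigations, software-based network throttling is the easiest to sketch in isolation. The token-bucket limiter below is only an illustrative assumption of how an out-of-band service might cap its RDMA injection rate; it is not the mechanism evaluated in the paper.

    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <thread>

    // Illustrative token-bucket limiter: an out-of-band service could call
    // acquire(bytes) before posting an RDMA transfer so that its memory-bus
    // traffic stays below a configured rate and interferes less with the
    // application (hypothetical policy, not the paper's implementation).
    class RdmaThrottle {
    public:
        explicit RdmaThrottle(double bytesPerSec)
            : rate_(bytesPerSec), tokens_(0.0),
              last_(std::chrono::steady_clock::now()) {}

        void acquire(std::size_t bytes) {
            refill();
            while (tokens_ < static_cast<double>(bytes)) {
                std::this_thread::sleep_for(std::chrono::microseconds(50));
                refill();
            }
            tokens_ -= static_cast<double>(bytes);
        }

    private:
        void refill() {
            auto now = std::chrono::steady_clock::now();
            std::chrono::duration<double> dt = now - last_;
            last_ = now;
            // Accumulate tokens, capped at one second's worth of burst.
            tokens_ = std::min(tokens_ + rate_ * dt.count(), rate_);
        }
        double rate_, tokens_;
        std::chrono::steady_clock::time_point last_;
    };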
Power will be a first-class operating constraint for exascale computing. In order to manage the power consumption of systems, measurement and control methods need to be developed. While several approaches have been developed by hardware manufacturers, they are vendor-specific and, in some cases, implementation-specific. Integrating all of the individual device-level measurement and control functionality into a single system is a difficult task that requires system-specific code. Sandia National Laboratories, in collaboration with many industry and academic partners, has developed a Power API specification consisting of a broad range of interfaces spanning from low-level hardware to platform management and accounting. For many of the interfaces to be useful, especially at large scale, measurement data must be collected and control directives must be distributed in a scalable manner. This paper details the challenges of providing large-scale power measurement and control and the scalable collection and control distribution architecture that is being integrated into the Power API reference implementation.
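The scalability issue raised above, aggregating per-node power measurements across a large machine, is commonly addressed with tree-based reduction rather than funneling every sample to a single manager. The MPI sketch below is a generic illustration of that idea under assumed names; it is not the Power API reference implementation's collection architecture.

    #include <mpi.h>
    #include <cstdio>

    // Hypothetical per-node measurement; a real system would read this through
    // a platform interface such as the Power API rather than this stub.
    double readNodePowerWatts() { return 250.0; }

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // Tree-structured reduction: samples are combined along the way instead
        // of being sent point-to-point to one collector, which is what makes
        // collection feasible at large scale.
        double local = readNodePowerWatts(), total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            std::printf("aggregate power: %.1f W across %d nodes\n", total, size);
        MPI_Finalize();
        return 0;
    }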
Scalable sparse LU factorization is critical for efficient numerical simulation of circuits and electrical power grids. In this work, we present a new scalable sparse direct solver called Basker. Basker introduces a new algorithm to parallelize the Gilbert-Peierls algorithm for sparse LU factorization. As architectures evolve, there exists a need for algorithms that are hierarchical in nature to match the hierarchy in thread teams, individual threads, and vector-level parallelism. Basker is designed to map well to this hierarchy in architectures. There is also a need for data layouts to match multiple levels of hierarchy in memory. Basker uses a two-dimensional hierarchical structure of sparse matrices that maps to the hierarchy in the memory architectures and to the hierarchy in parallelism. We present performance evaluations of Basker on the Intel Sandy Bridge and Xeon Phi platforms using circuit and power grid matrices taken from the University of Florida sparse matrix collection and from Xyce circuit simulations. Basker achieves a geometric-mean speedup of 5.91× on CPU (16 cores) and 7.4× on Xeon Phi (32 cores) relative to KLU. Basker outperforms Intel MKL Pardiso (PMKL) by as much as 30× on CPU (16 cores) and 7.5× on Xeon Phi (32 cores) for low-fill-in circuit matrices. Furthermore, Basker provides a 5.4× speedup on a challenging matrix sequence taken from an actual Xyce simulation.
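For context, the Gilbert-Peierls algorithm that Basker parallelizes computes L and U one column at a time, updating each column with the columns to its left before scaling it. The dense-storage sketch below shows only that column-by-column structure; it omits the symbolic depth-first search, pivoting, and the 2D blocked layout described above, and is an illustration rather than Basker's code.

    #include <vector>

    // Left-looking (Gilbert-Peierls style) LU without pivoting on a small dense
    // matrix, purely to show the column-at-a-time structure. The real algorithm
    // exploits sparsity: each column's triangular solve touches only the nonzero
    // pattern found by a symbolic depth-first search.
    void luLeftLooking(std::vector<double>& A, int n) {  // A is n*n, row-major
        auto a = [&](int i, int j) -> double& { return A[i * n + j]; };
        for (int k = 0; k < n; ++k) {
            // Update column k with every column j to its left ("left-looking").
            for (int j = 0; j < k; ++j)
                for (int i = j + 1; i < n; ++i)
                    a(i, k) -= a(i, j) * a(j, k);
            // Scale the subdiagonal of column k to form the L entries.
            for (int i = k + 1; i < n; ++i)
                a(i, k) /= a(k, k);
        }
    }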
Graph algorithms are challenging to parallelize on manycore architectures due to complex data dependencies and irregular memory access. We consider the well-studied problem of coloring the vertices of a graph. In many applications it is important to compute a coloring with few colors in near-linear time. In parallel, the optimistic (speculative) coloring method by Gebremedhin and Manne is the preferred approach, but it needs to be modified for manycore architectures. We discuss a range of implementation issues for this vertex-based optimistic approach. We also propose a novel edge-based optimistic approach that has more parallelism and is better suited to GPUs. We study the performance empirically on two architectures (Xeon Phi and GPU) and across many data sets (from finite element problems to social networks). Our implementation uses the Kokkos library, so it is portable across platforms. We show that on GPUs, we significantly reduce the number of colors (geometric mean 4X, but up to 48X) as compared to the widely used cuSPARSE library. In addition, our edge-based algorithm is 1.5 times faster on average than cuSPARSE, with speedups up to 139X on a circuit problem. We also show the effect of the coloring on a conjugate gradient solver using a multi-colored symmetric Gauss-Seidel method as preconditioner; the higher coloring quality found by the proposed methods reduces the overall solve time by up to 33% compared to cuSPARSE.
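A minimal serial rendering of the optimistic (speculative) scheme referenced above: color every vertex with the smallest color not used by its neighbors, then detect and recolor conflicting vertices; in the parallel versions both phases run concurrently over vertices (or, in the proposed variant, over edges). The CSR graph layout and function names below are assumptions for illustration, not the paper's Kokkos implementation.

    #include <vector>

    // Greedy first-fit coloring pass over a graph in CSR form (rowPtr/colIdx).
    // In the optimistic parallel method this loop runs concurrently; conflicts
    // (neighbors that speculatively chose the same color) are detected in a
    // second pass, and only the conflicted vertices are recolored.
    void greedyColorPass(const std::vector<int>& rowPtr,
                         const std::vector<int>& colIdx,
                         const std::vector<int>& worklist,
                         std::vector<int>& color) {       // -1 means uncolored
        for (int v : worklist) {
            std::vector<bool> used(color.size(), false);
            for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e)
                if (color[colIdx[e]] >= 0)
                    used[color[colIdx[e]]] = true;
            int c = 0;
            while (used[c]) ++c;          // smallest color unused by neighbors
            color[v] = c;
        }
    }

    // Conflict detection: the vertex with the smaller index loses the tie and
    // is queued for recoloring in the next round.
    std::vector<int> findConflicts(const std::vector<int>& rowPtr,
                                   const std::vector<int>& colIdx,
                                   const std::vector<int>& color) {
        std::vector<int> conflicted;
        for (int v = 0; v + 1 < static_cast<int>(rowPtr.size()); ++v)
            for (int e = rowPtr[v]; e < rowPtr[v + 1]; ++e) {
                int u = colIdx[e];
                if (u != v && color[u] == color[v] && v < u) {
                    conflicted.push_back(v);
                    break;
                }
            }
        return conflicted;
    }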
Brown, W.M.; Semin, Andrey; Hebenstreit, Michael; Khvostov, Sergey; Raman, Karthik; Plimpton, Steven J. International Conference for High Performance Computing, Networking, Storage and Analysis (SC).
Electrical power efficiency is a primary concern in designing modern HPC systems. Common strategies to improve CPU power efficiency rely on increased parallelism within a processor, enabled both by an increase in the vector capabilities within the core and by the number of cores within a processor. Although many-core processors have been available for some time, achieving power-efficient performance has been challenging due to the offload model. Here, we evaluate the performance of the molecular dynamics code LAMMPS on two new Intel® processors, including the second-generation many-core Intel® Xeon Phi™ processor that is available as a bootable CPU. We describe our approach to measure power consumption out-of-band and the software optimizations necessary to achieve energy efficiency. We analyze benefits from Intel® Advanced Vector Extensions 512 instructions and demonstrate increased simulation rates with over 9X the CPU+DRAM power efficiency compared to the unoptimized code on previous-generation processors.
In next-generation extreme-scale systems, application performance will be limited by memory performance characteristics. The first exascale system is projected to contain many petabytes of memory. In addition to the sheer volume of the memory required, device trends, such as shrinking feature sizes and reduced supply voltages, have the potential to increase the frequency of memory errors. As a result, resilience to memory errors is a key challenge. In this paper, we evaluate the viability of using memory compression to repair detectable uncorrectable errors (DUEs) in memory. We develop a software library, evaluate its performance and demonstrate that it is able to significantly compress memory of HPC applications. Further, we show that exploiting compressed memory pages to correct memory errors can significantly improve application performance on next-generation systems.
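One way to read the approach, under the assumption that the library keeps a compressed shadow copy of selected pages, is sketched below with zlib. The structure and policy are illustrative guesses, not the paper's library.

    #include <zlib.h>
    #include <vector>

    // Illustrative sketch (assumed policy, not the paper's library): keep a
    // compressed shadow copy of a page so that, if a detectable uncorrectable
    // error (DUE) later hits the live page, its contents can be restored by
    // decompressing the shadow instead of terminating the application.
    struct ShadowPage {
        std::vector<unsigned char> data;   // compressed copy of the page
        uLong originalSize = 0;
    };

    bool snapshotPage(const unsigned char* page, uLong pageSize, ShadowPage& out) {
        uLongf bound = compressBound(pageSize);
        out.data.resize(bound);
        if (compress(out.data.data(), &bound, page, pageSize) != Z_OK)
            return false;
        out.data.resize(bound);            // HPC pages often compress well
        out.originalSize = pageSize;
        return true;
    }

    bool repairPage(unsigned char* page, const ShadowPage& shadow) {
        uLongf size = shadow.originalSize;
        return uncompress(page, &size, shadow.data.data(),
                          static_cast<uLong>(shadow.data.size())) == Z_OK
               && size == shadow.originalSize;
    }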
Peridynamics is a nonlocal extension of classical continuum mechanics that is well-suited for solving problems with discontinuities such as cracks. This paper extends the peridynamic formulation to decompose a problem domain into a number of smaller overlapping subdomains and to enable the use of different time steps in different subdomains. This approach allows regions of interest to be isolated and solved at a small time step for increased accuracy while the rest of the problem domain can be solved at a larger time step for greater computational efficiency. Performance of the proposed method in terms of stability, accuracy, and computational cost is examined and several numerical examples are presented to corroborate the findings.
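The multi-time-step idea described above, advancing a refined region of interest with several small steps for every large step taken on the rest of the domain, can be sketched as a subcycling loop. The types and update routines below are placeholders for illustration; the actual coupling of overlapping subdomains follows the formulation in the paper.

    // Illustrative subcycling driver (assumed structure, not the paper's code):
    // the region of interest takes m small steps of size dtFine for every single
    // large step of size m*dtFine taken by the rest of the domain.
    struct Subdomain { double time = 0.0; /* nodes, bonds, state would live here */ };

    // Placeholder for an explicit peridynamic update of one subdomain.
    void stepExplicit(Subdomain& d, double dt) { d.time += dt; }

    // Placeholder for reconciling the overlapping region between subdomains.
    void exchangeOverlap(Subdomain&, Subdomain&) {}

    void advance(Subdomain& fine, Subdomain& coarse,
                 double dtFine, int m, int nLargeSteps) {
        for (int n = 0; n < nLargeSteps; ++n) {
            stepExplicit(coarse, m * dtFine);      // one large step elsewhere
            for (int s = 0; s < m; ++s)            // m small steps in the
                stepExplicit(fine, dtFine);        //   region of interest
            exchangeOverlap(fine, coarse);         // couple the overlap
        }
    }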