Publications


A distributed-memory hierarchical solver for general sparse linear systems

Parallel Computing

Chen, Chao; Pouransari, Hadi; Rajamanickam, Sivasankaran R.; Boman, Erik G.; Darve, Eric

We present a parallel hierarchical solver for general sparse linear systems on distributed-memory machines. For large-scale problems, this fully algebraic algorithm is faster and more memory-efficient than sparse direct solvers because it exploits the low-rank structure of fill-in blocks. Depending on the accuracy of low-rank approximations, the hierarchical solver can be used either as a direct solver or as a preconditioner. The parallel algorithm is based on data decomposition and requires only local communication for updating boundary data on every processor. Moreover, the computation-to-communication ratio of the parallel algorithm is approximately the volume-to-surface-area ratio of the subdomain owned by every processor. We present various numerical results to demonstrate the versatility and scalability of the parallel algorithm.
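
To make the low-rank idea concrete: a fill-in block coupling two well-separated subdomains is numerically low-rank and can be replaced by a compact factorization. The sketch below uses a truncated SVD purely for illustration (hierarchical solvers typically use cheaper rank-revealing factorizations, and `compress_block` is a hypothetical name, not the authors' API):

```python
import numpy as np

def compress_block(B, tol=1e-8):
    """Replace a dense fill-in block B by low-rank factors U, V with
    B ~= U @ V, truncating singular values below tol * sigma_max.
    Illustrative only; production hierarchical solvers use cheaper
    rank-revealing factorizations than a full SVD."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    k = int(np.sum(s > tol * s[0])) if s.size and s[0] > 0 else 0
    return U[:, :k] * s[:k], Vt[:k, :]          # shapes (m, k) and (k, n)

# A block coupling two well-separated point sets is numerically low-rank:
x, y = np.linspace(0, 1, 200), np.linspace(2, 3, 200)
B = 1.0 / np.abs(x[:, None] - y[None, :])       # smooth kernel interaction
Uk, Vk = compress_block(B)
print(Uk.shape[1], np.linalg.norm(B - Uk @ Vk) / np.linalg.norm(B))
```

Storing the two factors costs O((m + n)k) instead of O(mn) memory, which is the source of the savings over a sparse direct solver.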

A nested dissection approach to sparse matrix partitioning for parallel computations

Proposed for publication in SIAM Journal on Scientific Computing.

Boman, Erik G.

We consider how to distribute sparse matrices among processes to reduce communication costs in parallel sparse matrix computations, specifically, sparse matrix-vector multiplication. Our main contributions are: (i) an exact graph model for communication with general (two-dimensional) matrix distribution, and (ii) a recursive partitioning algorithm based on nested dissection (substructuring). We show that the communication volume is closely linked to vertex separators. We have implemented our algorithm using hypergraph partitioning software to enable a fair comparison with existing methods. We present numerical results for sparse matrices from several application areas, with up to 9 million nonzeros. The results show that our new approach is superior to traditional 1D partitioning and comparable to a current leading partitioning method, the fine-grain hypergraph method, in terms of communication volume. Our nested dissection method has two advantages over the fine-grain method: it is faster to compute, and the resulting distribution requires fewer communication messages.
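
The connection between communication volume and the distribution can be illustrated by counting the vector entries that must cross process boundaries in y = A x. A minimal sketch for the simple 1D row distribution the paper improves upon (the function name and the block partition are hypothetical):

```python
import numpy as np
from scipy.sparse import random as sparse_random

def comm_volume_1d(A, row_part):
    """Communication volume of y = A @ x under a 1D row distribution:
    process row_part[i] owns row i and the matching entries of x and y.
    Every x[j] needed by a process other than its owner costs one word."""
    A = A.tocoo()
    volume = 0
    for j in range(A.shape[1]):
        needers = set(row_part[A.row[A.col == j]])  # processes touching col j
        needers.discard(row_part[j])                # the owner has x[j] locally
        volume += len(needers)
    return volume

A = sparse_random(1000, 1000, density=0.01, format='coo', random_state=0)
part = np.arange(1000) // 250                       # 4 contiguous row blocks
print(comm_volume_1d(A, part))
```

The nested dissection approach drives this count down by cutting the matrix graph along small vertex separators, so that few columns are touched by more than one process.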

A robust hierarchical solver for ill-conditioned systems with applications to ice sheet modeling

Journal of Computational Physics

Chen, Chao; Cambier, Leopold; Boman, Erik G.; Rajamanickam, Sivasankaran R.; Tuminaro, Raymond S.; Darve, Eric

A hierarchical solver is proposed for solving sparse ill-conditioned linear systems in parallel. The solver is based on a modification of the LoRaSp method, but employs a deferred-compression technique, which provably reduces the approximation error and significantly improves efficiency. Moreover, the deferred-compression technique introduces minimal overhead and does not affect parallelism. As a result, the new solver achieves linear computational complexity under mild assumptions and excellent parallel scalability. To demonstrate the performance of the new solver, we focus on applying it to sparse linear systems arising from ice sheet modeling. The strong anisotropy associated with the thin structure of ice sheets creates serious challenges for existing solvers. To address the anisotropy, we also developed a customized partitioning scheme for the solver, which captures the strong-coupling direction accurately. In general, the partitioning can be computed algebraically with existing software packages, and thus the new solver is generalizable to other sparse linear systems. Our results show that ice sheet problems with about 300 million degrees of freedom can be solved in just a few minutes using 1024 processors.
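
The role of an anisotropy-aware partitioning can be sketched algebraically: weighting graph edges by the coupling strength |a_ij| steers a partitioner to cut weak connections and keep strongly coupled unknowns together. Below is a minimal spectral-bisection sketch of that idea (the paper's actual scheme is more specialized; the function name is hypothetical):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags, kron, identity

def strength_weighted_bisection(A):
    """Bisect the matrix graph with edge weights |a_ij| using the Fiedler
    vector of the weighted graph Laplacian; strong couplings are expensive
    to cut, so they tend to stay within one part."""
    W = abs(A) - diags(abs(A).diagonal())       # off-diagonal strengths
    W = ((W + W.T) / 2).toarray()               # symmetrized, dense for eigh
    L = np.diag(W.sum(axis=1)) - W              # weighted graph Laplacian
    _, vecs = np.linalg.eigh(L)                 # eigenvalues ascending
    fiedler = vecs[:, 1]                        # second-smallest eigenvector
    return fiedler >= np.median(fiedler)        # two equal-sized parts

# Anisotropic 2D Laplacian: strong coupling along one grid direction
n, eps = 16, 1e-3
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = csr_matrix(kron(identity(n), T) + eps * kron(T, identity(n)))
parts = strength_weighted_bisection(A)
print(parts.reshape(n, n).astype(int))          # cut crosses the weak axis
```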

A scalable parallel graph coloring algorithm for distributed memory computers

Lecture Notes in Computer Science

Boman, Erik G.; Bozdağ, Doruk; Çatalyürek, Ümit; Gebremedhin, Assefaw H.; Manne, Fredrik

In large-scale parallel applications, a graph coloring is often carried out to schedule computational tasks. In this paper, we describe a new distributed-memory algorithm for doing the coloring itself in parallel. The algorithm operates in an iterative fashion: in each round, vertices are speculatively colored based on limited information, and then a set of incorrectly colored vertices, to be recolored in the next round, is identified. Parallel speedup is achieved in part by reducing the frequency of communication among processors. Experimental results on a PC cluster using up to 16 processors show that the algorithm is scalable.
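
The round structure of the algorithm (speculative coloring, then conflict detection) is easy to see in a sequential simulation. The sketch below is a hedged illustration of that structure, not the authors' implementation; `adj` maps each vertex to its neighbor list and `owner` is a hypothetical vertex-to-process assignment:

```python
def speculative_coloring(adj, owner):
    """Round-based speculative coloring. Each round, pending vertices are
    colored greedily: neighbors on the same process are seen up to date,
    remote neighbors only through a stale snapshot (mimicking infrequent
    communication). Cross-process conflicts are recolored next round."""
    n = len(adj)
    color = [-1] * n
    pending = list(range(n))
    rounds = 0
    while pending:
        rounds += 1
        snapshot = color[:]                       # stale remote information
        for v in pending:
            used = {color[u] if owner[u] == owner[v] else snapshot[u]
                    for u in adj[v]}
            color[v] = next(c for c in range(n + 1) if c not in used)
        # conflicts on cross-process edges; lower-numbered vertex recolors
        pending = sorted({min(u, v) for v in pending for u in adj[v]
                          if color[u] == color[v] and owner[u] != owner[v]})
    return color, rounds

# Example: a 10-cycle distributed over two "processes" in contiguous blocks
adj = {v: [(v - 1) % 10, (v + 1) % 10] for v in range(10)}
owner = [v // 5 for v in range(10)]
colors, rounds = speculative_coloring(adj, owner)
print(colors, "valid in", rounds, "rounds")
```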

Advances in Mixed Precision Algorithms: 2021 Edition

Abdelfattah, Ahmad A.; Anzt, Hartwig A.; Ayala, Alan A.; Boman, Erik G.; Carson, Erin C.; Cayrols, Sebastien C.; Cojean, Terry C.; Dongarra, Jack D.; Falgout, Rob F.; Gates, Mark G.; Grützmacher, Thomas G.; Higham, Nicholas J.; Kruger, Scott E.; Li, Sherry L.; Lindquist, Neil L.; Liu, Yang L.; Loe, Jennifer A.; Nayak, Pratik N.; Osei-Kuffuor, Daniel O.; Pranesh, Sri P.; Rajamanickam, Sivasankaran R.; Ribizel, Tobias R.; Smith, Bryce B.; Swirydowicz, Kasia S.; Thomas, Stephen T.; Tomov, Stanimire T.; Tsai, Yaohung M.; Yamazaki, Ichitaro Y.; Yang, Ulrike M.

Over the last year, the ECP xSDK-multiprecision effort has made tremendous progress in developing and deploying new mixed precision technology and customizing the algorithms for the hardware deployed in the ECP flagship supercomputers. The effort has also succeeded in creating a cross-laboratory community of scientists interested in mixed precision technology who are now working together to deploy this technology for ECP applications. In this report, we highlight some of the most promising and impactful achievements of the last year. Among the highlights we present are: mixed precision iterative refinement (IR) using a dense LU factorization, achieving a 1.8× speedup on Spock; results and strategies for mixed precision IR using a sparse LU factorization; a mixed precision eigenvalue solver; mixed precision GMRES-IR deployed in Trilinos, achieving a 1.4× speedup over standard GMRES; Compressed Basis (CB) GMRES deployed in Ginkgo, achieving an average 1.4× speedup over standard GMRES; preparing hypre for mixed precision execution; mixed precision sparse approximate inverse preconditioners achieving an average 1.2× speedup; and a detailed description of the memory accessor, which separates the arithmetic precision from the memory precision, enabling memory-bound low precision BLAS 1/2 operations to increase accuracy by computing in high precision without degrading performance. We emphasize that many of the highlights presented here have also been submitted to peer-reviewed journals or established conferences, and are under peer review or have already been published.
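
One of the listed highlights, mixed precision iterative refinement, follows a classical pattern: factorize once in low precision, then recover high-precision accuracy with cheap refinement steps. A generic textbook sketch of the technique (not the ECP codes themselves; `mixed_precision_ir` is a hypothetical name):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_ir(A, b, tol=1e-12, max_iter=20):
    """Mixed precision IR: LU factorization in float32 (the expensive,
    speedup-bearing step), residuals and solution updates in float64."""
    lu = lu_factor(A.astype(np.float32))              # low precision LU
    x = lu_solve(lu, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x                                 # high precision residual
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        d = lu_solve(lu, r.astype(np.float32))        # correction, f32 solve
        x += d.astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 500)) + 500.0 * np.eye(500)  # well conditioned
b = rng.standard_normal(500)
x = mixed_precision_ir(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # near float64 accuracy
```

The GMRES-IR and CB-GMRES variants mentioned in the report generalize this idea, replacing the low-precision correction solve with a preconditioned Krylov solve or storing the Krylov basis in compressed precision.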
