- Harry S. Truman Fellow
- Sandia National Laboratories – Livermore, CA

BDKS16-TR

Abstract: We propose a fine-grained hypergraph model for sparse matrix-matrix multiplication (SpGEMM), a key computational kernel in scientific computing and data analysis whose performance is often communication bound. This model correctly describes both the interprocessor communication volume along a critical path in a parallel computation and also the volume of data moving through the memory hierarchy in a sequential computation. We show that identifying a communication-optimal algorithm for particular input matrices is equivalent to solving a hypergraph partitioning problem. Our approach is sparsity dependent, meaning that we seek the best algorithm for the given input matrices. In addition to our (3D) fine-grained model, we also propose coarse-grained 1D and 2D models that correspond to simpler SpGEMM algorithms. We explore the relations between our models theoretically, and we study their performance experimentally in the context of three applications that use SpGEMM as a key computation. For each application, we find that at least one coarse-grained model is as communication efficient as the fine-grained model. We also observe that different applications have affinities for different algorithms. Our results demonstrate that hypergraphs are an accurate model for reasoning about the communication costs of SpGEMM as well as a practical tool for exploring the SpGEMM algorithm design space.
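To make the fine-grained model concrete, the following Python sketch assembles such a hypergraph from the nonzero structure of the inputs: one vertex per scalar multiplication A(i,k)*B(k,j), and one net per nonzero of A, B, and C collecting the multiplications that touch it. This is an illustrative reconstruction based on the abstract (the function name and data layout are mine, not the paper's code); partitioning the vertices with a hypergraph partitioner would then correspond to choosing a low-communication algorithm.

```python
from collections import defaultdict

def fine_grained_hypergraph(A_nonzeros, B_nonzeros):
    """Sketch of the fine-grained SpGEMM hypergraph.

    Vertices are the scalar multiplications A(i,k)*B(k,j); each nonzero
    of A, of B, and of C = A*B induces a net containing the
    multiplications that read or write it.
    """
    B_by_row = defaultdict(list)
    for (k, j) in B_nonzeros:
        B_by_row[k].append(j)

    vertices = []            # one vertex per flop (i, k, j)
    nets = defaultdict(set)  # data element -> incident flops
    for (i, k) in A_nonzeros:
        for j in B_by_row[k]:
            v = (i, k, j)
            vertices.append(v)
            nets[("A", i, k)].add(v)  # flop reads A(i,k)
            nets[("B", k, j)].add(v)  # flop reads B(k,j)
            nets[("C", i, j)].add(v)  # flop contributes to C(i,j)
    return vertices, nets

# Toy example: partitioning `vertices` among processors so that few
# nets straddle parts (the cut) means low communication volume.
verts, nets = fine_grained_hypergraph({(0, 0), (0, 1), (1, 1)},
                                      {(0, 0), (1, 0), (1, 1)})
print(len(verts), "flops,", len(nets), "nets")
```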

BibTeX:
@techreport{BDKS16-TR, author = {Grey Ballard and Alex Druinsky and Nicholas Knight and Oded Schwartz}, title = {Hypergraph Partitioning for Sparse Matrix-Matrix Multiplication}, institution = {arXiv}, year = {2016}, number = {1603.05627}, url = {http://arxiv.org/abs/1603.05627} }

KBP16 [technical report]

Abstract: Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for the given input matrix A, such that WH approximates A. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: (1) it performs well for both dense and sparse matrices, and (2) it allows the user to choose any one of the multiple algorithms for solving the updates to low rank factors W and H within the alternating iterations. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements.
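As a concrete serial instance of the alternating framework the abstract describes, the sketch below uses Lee-Seung multiplicative updates as the update rule for W and H; this is one of several possible update choices and is not the paper's distributed MPI implementation.

```python
import numpy as np

def nmf_mu(A, r, iters=200, eps=1e-9, seed=0):
    """Serial NMF sketch: find W, H >= 0 with A ~= W @ H.

    Multiplicative updates are one concrete choice of the alternating
    update; the paper's algorithm is distributed and supports several
    local NLS solvers.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)  # fix W, update H
        W *= (A @ H.T) / (W @ H @ H.T + eps)  # fix H, update W
    return W, H

A = np.random.default_rng(1).random((100, 80))  # nonnegative test matrix
W, H = nmf_mu(A, r=10)
print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
```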

BibTeX:
@inproceedings{KBP16, author = {Kannan, Ramakrishnan and Ballard, Grey and Park, Haesun}, title = {A High-performance Parallel Algorithm for Nonnegative Matrix Factorization}, booktitle = {Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming}, publisher = {ACM}, address = {New York, NY, USA}, month = {February}, year = {2016}, series = {PPoPP '16}, pages = {9:1--9:11}, url = {http://doi.acm.org/10.1145/2851141.2851152} }

AB+15-TR

Abstract: Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdos-Renyi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first-ever implementation of the 3D SpGEMM formulation that also exploits multiple (intra-node and inter-node) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrency. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research.

BibTeX:
@techreport{AB+15-TR, author = {Ariful Azad and Grey Ballard and Aydin Buluc and James Demmel and Laura Grigori and Oded Schwartz and Sivan Toledo and Samuel Williams}, title = {Exploiting Multiple Levels of Parallelism in Sparse Matrix-Matrix Multiplication}, institution = {arXiv}, month = {October}, year = {2015}, number = {1510.00844}, url = {http://arxiv.org/abs/1510.00844} }

ABK15-TR

Abstract: As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data, assuming double precision. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 5000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
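For readers unfamiliar with the Tucker decomposition, here is a sequential numpy sketch of a truncated HOSVD, whose key steps (SVDs of mode unfoldings and tensor-times-matrix products) are the same computations the paper parallelizes; the paper's data distribution and communication analysis are not reproduced, and the function name is mine.

```python
import numpy as np

def hosvd(X, ranks):
    """Sequential Tucker sketch via truncated HOSVD.

    For each mode n, take the leading left singular vectors of the
    mode-n unfolding as the factor matrix, then form the core by
    multiplying X by the factor transposes in every mode.
    """
    factors = []
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)  # mode-n unfolding
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        factors.append(U[:, :r])
    G = X
    for n, U in enumerate(factors):  # core: G = X times U_n^T in mode n
        G = np.moveaxis(np.tensordot(U.T, np.moveaxis(G, n, 0), axes=1), 0, n)
    return G, factors

# Random data compresses poorly; the paper's large ratios are for
# smooth simulation data with genuine low multilinear rank.
X = np.random.default_rng(0).random((20, 30, 40))
G, Us = hosvd(X, (5, 5, 5))
Xhat = G
for n, U in enumerate(Us):  # reconstruct: core times U_n in each mode
    Xhat = np.moveaxis(np.tensordot(U, np.moveaxis(Xhat, n, 0), axes=1), 0, n)
print("compression:", X.size / (G.size + sum(U.size for U in Us)))
print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```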

BibTeX:
@techreport{ABK15-TR, author = {Woody Austin and Grey Ballard and Tamara G. Kolda}, title = {Parallel Tensor Compression for Large-Scale Scientific Data}, institution = {arXiv}, year = {2015}, number = {1510.06689}, note = {To appear in IPDPS}, url = {http://arxiv.org/abs/1510.06689} }

BB15 [technical report]

Abstract: Matrix multiplication is a fundamental computation in many scientific disciplines. In this paper, we show that novel fast matrix multiplication algorithms can significantly outperform vendor implementations of the classical algorithm and Strassen's fast algorithm on modest problem sizes and shapes. Furthermore, we show that the best choice of fast algorithm depends not only on the size of the matrices but also on their shape. We develop a code generation tool to automatically implement multiple sequential and shared-memory parallel variants of each fast algorithm, including our novel parallelization scheme. This allows us to rapidly benchmark over 20 fast algorithms on several problem sizes. Furthermore, we discuss a number of practical implementation issues for these algorithms on shared-memory machines that can direct further research on making fast algorithms practical.
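As one example of the family of fast algorithms the framework targets, here is a hand-coded recursive Strassen sketch for square matrices whose dimension is a power of two times the cutoff; the paper's generated code covers many more algorithms and shapes, and this sketch only illustrates the recursive structure.

```python
import numpy as np

def strassen(A, B, cutoff=128):
    """Strassen's algorithm: 7 recursive multiplies instead of 8."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B  # classical base case
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))
```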

BibTeX:
@inproceedings{BB15, author = {Benson, Austin R. and Ballard, Grey}, title = {A Framework for Practical Parallel Fast Matrix Multiplication}, booktitle = {Proceedings of the 20th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming}, publisher = {ACM}, address = {New York, NY, USA}, month = {February}, year = {2015}, series = {PPoPP 2015}, pages = {42--53}, url = {http://doi.acm.org/10.1145/2688500.2688513} }

BBDLS15-TR

Abstract: Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this paper that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. Finally, we benchmark performance and demonstrate our improved numerical accuracy.

BibTeX:
@techreport{BBDLS15-TR, author = {Grey Ballard and Austin R. Benson and Alex Druinsky and Benjamin Lipshitz and Oded Schwartz}, title = {Improving the numerical stability of fast matrix multiplication algorithms}, institution = {arXiv}, month = {July}, year = {2015}, url = {http://arxiv.org/abs/1507.00687} }

BDGJ+15

Abstract: The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. Experiments on supercomputers demonstrate the benefits of the communication cost improvements: in particular, our experiments show substantial improvements over tuned library implementations for tall-and-skinny matrices. We also provide algorithmic improvements to the Householder QR and CAQR algorithms, and we investigate several alternatives to the Householder reconstruction algorithm that sacrifice guarantees on numerical stability in some cases in order to obtain higher performance.
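A minimal sketch of the TSQR reduction tree may help: each leaf factors a row block, and R factors are combined pairwise up the tree. The Householder reconstruction step, which is the paper's contribution, is deliberately omitted here; only the factorization flow is shown.

```python
import numpy as np

def tsqr(A, blocks=4):
    """Tall-Skinny QR sketch: a reduction tree of small QR factorizations.

    Each leaf factors a row block; R factors are stacked pairwise and
    re-factored up the tree. Returns only the final R.
    """
    Rs = [np.linalg.qr(blk, mode="r") for blk in np.array_split(A, blocks)]
    while len(Rs) > 1:  # pairwise reduction tree
        Rs = [np.linalg.qr(np.vstack(pair), mode="r")
              for pair in zip(Rs[::2], Rs[1::2])]
    return Rs[0]

A = np.random.default_rng(0).random((4096, 16))
R = tsqr(A)
R_ref = np.linalg.qr(A, mode="r")
# The two R factors agree up to the signs of their rows.
print(np.allclose(np.abs(R), np.abs(R_ref)))
```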

BibTeX:
@article{BDGJ+15, author = {Ballard, Grey and Demmel, James and Grigori, Laura and Knight, Nicholas and Jacquelin, Mathias and Nguyen, Hong Diep}, title = {Reconstructing {H}ouseholder Vectors from Tall-Skinny {QR}}, journal = {Journal of Parallel and Distributed Computing}, month = {August}, year = {2015}, volume = {85}, pages = {3-31}, url = {http://www.sciencedirect.com/science/article/pii/S074373151500101X} }

BDK15 [technical report]

Abstract: The running time of an algorithm depends on both arithmetic and communication (i.e., data movement) costs, and the relative costs of communication are growing over time. In this work, we present sequential and distributed-memory parallel algorithms for tridiagonalizing full symmetric and symmetric band matrices that asymptotically reduce communication compared to previous approaches. The tridiagonalization of a symmetric band matrix is a key kernel in solving the symmetric eigenvalue problem for both full and band matrices. In order to preserve structure, tridiagonalization routines use annihilate-and-chase procedures that previously have suffered from poor data locality and high parallel latency cost. We improve both by reorganizing the computation and obtain asymptotic improvements. We also propose new algorithms for reducing a full symmetric matrix to band form in a communication-efficient manner. In this article, we consider the cases of computing eigenvalues only and of computing eigenvalues and all eigenvectors.

BibTeX:
@article{BDK15, author = {Ballard, Grey and Demmel, James and Knight, Nicholas}, title = {Avoiding Communication in Successive Band Reduction}, journal = {ACM Transactions on Parallel Computing}, publisher = {ACM}, address = {New York, NY, USA}, month = {February}, year = {2015}, volume = {1}, number = {2}, pages = {11:1--11:37}, url = {http://doi.acm.org/10.1145/2686877} }

BDKS15

Abstract: The performance of parallel algorithms for sparse matrix-matrix multiplication is typically determined by the amount of interprocessor communication performed, which in turn depends on the nonzero structure of the input matrices. In this paper, we characterize the communication cost of a sparse matrix-matrix multiplication algorithm in terms of the size of a cut of an associated hypergraph that encodes the computation for a given input nonzero structure. Obtaining an optimal algorithm corresponds to solving a hypergraph partitioning problem. Our hypergraph model generalizes several existing models for sparse matrix-vector multiplication, and we can leverage hypergraph partitioners developed for that computation to improve application-specific algorithms for multiplying sparse matrices.

BibTeX:
@inproceedings{BDKS15, author = {Ballard, Grey and Druinsky, Alex and Knight, Nicholas and Schwartz, Oded}, title = {Brief Announcement: Hypergraph Partitioning for Parallel Sparse Matrix-Matrix Multiplication}, booktitle = {Proceedings of the 27th ACM on Symposium on Parallelism in Algorithms and Architectures}, publisher = {ACM}, address = {New York, NY, USA}, month = {June}, year = {2015}, series = {SPAA '15}, pages = {86--88}, url = {http://doi.acm.org/10.1145/2755573.2755613} }

BPKS15 [technical report]

Abstract: Given two sets of vectors A and B, our problem is to find the top-t dot products among all possible pairs where one vector is taken from each set. This is a fundamental mathematical problem that appears in numerous data applications involving similarity search, link prediction, and collaborative filtering. We propose a sampling-based approach that avoids direct computation of all dot products. We select diamonds (i.e., four-cycles) from the weighted tripartite representation of A and B. The probability of selecting a diamond corresponding to pair (i,j) is proportional to the square of the dot product between vectors i (from A) and j (from B), amplifying the focus on the largest-magnitude entries. Experimental results indicate that diamond sampling is orders of magnitude faster than direct computation and requires far fewer samples than any competing approach. We also apply diamond sampling to the special case of maximum inner product search, and get significantly better results than the state-of-the-art hashing methods.
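The sketch below illustrates the sampling idea with a simpler relative, wedge (three-path) sampling, rather than the paper's diamond sampling: index triples (k, i, j) are drawn with probability proportional to |A(k,i)||B(k,j)|, so sign-weighted counts have expectation proportional to the dot products. Diamond sampling instead samples four-cycles to weight pair (i,j) by the square of its dot product, which this simplification does not capture; the function name is mine.

```python
import numpy as np

def wedge_sample_scores(A, B, samples=100_000, seed=0):
    """Simplified sampling estimator for large dot products.

    Samples triples (k, i, j) with probability proportional to
    |A[k,i]| * |B[k,j]| and accumulates sign(A[k,i] * B[k,j]); the
    expected score of pair (i,j) is proportional to A[:,i] . B[:,j].
    """
    rng = np.random.default_rng(seed)
    rowA, rowB = np.abs(A).sum(axis=1), np.abs(B).sum(axis=1)
    pk = rowA * rowB / (rowA * rowB).sum()
    scores = np.zeros((A.shape[1], B.shape[1]))
    for k in rng.choice(len(pk), size=samples, p=pk):
        i = rng.choice(A.shape[1], p=np.abs(A[k]) / rowA[k])
        j = rng.choice(B.shape[1], p=np.abs(B[k]) / rowB[k])
        scores[i, j] += np.sign(A[k, i] * B[k, j])
    return scores

A = np.random.default_rng(1).random((50, 30))
B = np.random.default_rng(2).random((50, 40))
scores = wedge_sample_scores(A, B)
print("sampled argmax:", np.unravel_index(scores.argmax(), scores.shape))
print("true argmax:   ", np.unravel_index((A.T @ B).argmax(), (A.T @ B).shape))
```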

BibTeX:
@inproceedings{BPKS15, author = {Grey Ballard and Ali Pinar and Tamara G. Kolda and C. Seshadhri}, title = {Diamond Sampling for Approximate Maximum All-pairs Dot-product {(MAD)} Search}, booktitle = {15th IEEE International Conference on Data Mining}, publisher = {IEEE Computer Society}, month = {November}, year = {2015}, series = {ICDM '15}, pages = {11--20}, url = {http://dx.doi.org/10.1109/ICDM.2015.46} }

BPKS15-TR [published version]

Abstract: Given two sets of vectors A and B, our problem is to find the top-t dot products among all possible pairs where one vector is taken from each set. This is a fundamental mathematical problem that appears in numerous data applications involving similarity search, link prediction, and collaborative filtering. We propose a sampling-based approach that avoids direct computation of all dot products. We select diamonds (i.e., four-cycles) from the weighted tripartite representation of A and B. The probability of selecting a diamond corresponding to pair (i,j) is proportional to the square of the dot product between vectors i (from A) and j (from B), amplifying the focus on the largest-magnitude entries. Experimental results indicate that diamond sampling is orders of magnitude faster than direct computation and requires far fewer samples than any competing approach. We also apply diamond sampling to the special case of maximum inner product search, and get significantly better results than the state-of-the-art hashing methods.

BibTeX:
@techreport{BPKS15-TR, author = {Grey Ballard and Ali Pinar and Tamara G. Kolda and C. Seshadhri}, title = {Diamond Sampling for Approximate Maximum All-pairs Dot-product {(MAD)} Search}, institution = {arXiv}, month = {June}, year = {2015}, url = {http://arxiv.org/abs/1506.03872} }

BSH15-TR

Abstract: We consider the sequence of sparse matrix-matrix multiplications performed during the setup phase of algebraic multigrid. In particular, we show that the most commonly used parallel algorithm is often not the most communication-efficient one for all of the matrix-matrix multiplications involved. By using an alternative algorithm, we show that the communication costs are reduced (in theory and practice), and we demonstrate the performance benefit for both model (structured) and more realistic unstructured problems on large-scale distributed-memory parallel systems. Our theoretical analysis shows that we can reduce communication by a factor of up to 5.4 for a model problem, and we observe in our empirical evaluation communication reductions of factors up to 4.7 for structured problems and 3.7 for unstructured problems. These reductions in communication translate to run-time speedups of up to factors of 2.3 and 2.5, respectively.
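The computation in question is the Galerkin triple product A_c = R A P formed during multigrid setup. The toy scipy sketch below (with an illustrative piecewise-constant interpolation P, my choice for the example) shows the two sparse multiplies involved; the paper's contribution is selecting a better parallel SpGEMM algorithm for each of them, which a serial sketch cannot exhibit.

```python
import numpy as np
import scipy.sparse as sp

# 1D Poisson fine-grid operator and a simple interpolation matrix.
n = 64
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
P = sp.csr_matrix(
    (np.ones(n), (np.arange(n), np.arange(n) // 2)), shape=(n, n // 2)
)  # piecewise-constant interpolation (illustrative choice)
R = P.T.tocsr()

# The setup phase computes R * A * P; either association gives the
# same coarse operator but implies a different multiplication schedule.
A_coarse_1 = (R @ A) @ P  # associate left-to-right
A_coarse_2 = R @ (A @ P)  # associate right-to-left
print((A_coarse_1 != A_coarse_2).nnz == 0)
```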

BibTeX:
@techreport{BSH15-TR, author = {Ballard, Grey and Siefert, Christopher and Hu, Jonathan}, title = {Reducing Communication Costs for Sparse Matrix Multiplication within Algebraic Multigrid}, institution = {Sandia National Laboratories}, month = {September}, year = {2015}, number = {SAND2015-3275}, url = {http://prod.sandia.gov/techlib/access-control.cgi/2015/153275.pdf} }

KBP15-TR [published version]

Abstract: Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for the given input matrix A, such that A is approximated by WH. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big datasets. Existing distributed-memory algorithms are limited in terms of performance and applicability, as they are implemented using Hadoop and are designed only for sparse matrices. We propose a distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. To our knowledge, our algorithm is the first high-performance parallel algorithm for NMF. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: (1) it performs well for both dense and sparse matrices, and (2) it allows the user to choose from among multiple algorithms for solving local NLS subproblems within the alternating iterations. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements.

BibTeX:
@techreport{KBP15-TR, author = {Ramakrishnan Kannan and Grey Ballard and Haesun Park}, title = {A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization}, institution = {arXiv}, month = {September}, year = {2015}, number = {1509.09313}, url = {http://arxiv.org/abs/1509.09313} }

BB14

BibTeX:
@article{BB14, author = {Grey Ballard and John Baxley}, title = {Asymptotic Behavior of the Eigenvalues of Toeplitz Integral Operators Associated with the Hankel Transform}, journal = {Dynamic Systems and Applications}, month = {December}, year = {2014}, volume = {23}, number = {4}, pages = {505--530} }

BB14-TR [published version]

Abstract: Matrix multiplication is a fundamental computation in many scientific disciplines. In this paper, we show that novel fast matrix multiplication algorithms can significantly outperform vendor implementations of the classical algorithm and Strassen's fast algorithm on modest problem sizes and shapes. Furthermore, we show that the best choice of fast algorithm depends not only on the size of the matrices but also on their shape. We develop a code generation tool to automatically implement multiple sequential and shared-memory parallel variants of each fast algorithm, including our novel parallelization scheme. This allows us to rapidly benchmark over 20 fast algorithms on several problem sizes. Furthermore, we discuss a number of practical implementation issues for these algorithms on shared-memory machines that can direct further research on making fast algorithms practical.

BibTeX:
@techreport{BB14-TR, author = {Benson, Austin and Ballard, Grey}, title = {A Framework for Practical Parallel Fast Matrix Multiplication}, institution = {arXiv}, month = {September}, year = {2014}, url = {http://arxiv.org/abs/1409.2908} }

BBDD+14 [technical report]

Abstract: We describe and analyze a novel symmetric triangular factorization algorithm. The algorithm is essentially a block version of Aasen's triangular tridiagonalization. It factors a dense symmetric matrix $A$ as the product $A=PLTL^TP^T$, where $P$ is a permutation matrix, $L$ is lower triangular, and $T$ is block tridiagonal and banded. The algorithm is the first symmetric-indefinite communication-avoiding factorization: it performs an asymptotically optimal amount of communication in a two-level memory hierarchy for almost any cache-line size. Adaptations of the algorithm to parallel computers are likely to be communication efficient as well; one such adaptation has been recently published. The current paper describes the algorithm, proves that it is numerically stable, and proves that it is communication optimal.

BibTeX:
@article{BBDD+14, author = {Ballard, G. and Becker, D. and Demmel, J. and Dongarra, J. and Druinsky, A. and Peled, I. and Schwartz, O. and Toledo, S. and Yamazaki, I.}, title = {Communication-Avoiding Symmetric-Indefinite Factorization}, journal = {SIAM Journal on Matrix Analysis and Applications}, month = {November}, year = {2014}, volume = {35}, number = {4}, pages = {1364--1406}, url = {http://dx.doi.org/10.1137/130929060} }

BCDH+14

Abstract: The traditional metric for the efficiency of a numerical algorithm has been the number of arithmetic operations it performs. Technological trends have long been reducing the time to perform an arithmetic operation, so it is no longer the bottleneck in many algorithms; rather, communication, or moving data, is the bottleneck. This motivates us to seek algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. In this paper we summarize recent progress in three aspects of this problem. First we describe lower bounds on communication. Some of these generalize known lower bounds for dense classical (O(n^3)) matrix multiplication to all direct methods of linear algebra, to sequential and parallel algorithms, and to dense and sparse matrices. We also present lower bounds for Strassen-like algorithms, and for iterative methods, in particular Krylov subspace methods applied to sparse matrices. Second, we compare these lower bounds to widely used versions of these algorithms, and note that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identify or invent new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrate large speed-ups in theory and practice.
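For reference, the central classical lower bounds surveyed here take the following form (a sketch of the standard statement, with G denoting the number of flops and M the fast or local memory size; see the paper for the precise hypotheses):

```latex
% Classical communication lower bounds (G = #flops, M = memory size):
\[
  \#\text{words moved} = \Omega\!\left(\frac{G}{\sqrt{M}}\right),
  \qquad
  \#\text{messages} = \Omega\!\left(\frac{G}{M^{3/2}}\right).
\]
% Example: sequential dense matrix multiplication has G = n^3, giving
% \Omega(n^3/\sqrt{M}) words; on P processors with G = n^3/P per
% processor and M = O(n^2/P), this becomes \Omega(n^2/\sqrt{P}) words.
```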

BibTeX:
@article{BCDH+14, author = {Ballard, G. and Carson, E. and Demmel, J. and Hoemmen, M. and Knight, N. and Schwartz, O.}, title = {Communication lower bounds and optimal algorithms for numerical linear algebra}, journal = {Acta Numerica}, month = {May}, year = {2014}, volume = {23}, pages = {1--155}, url = {http://journals.cambridge.org/article_S0962492914000038} }

BDGJ+14 [technical report]

Abstract: The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. As a result, our final parallel QR algorithm outperforms ScaLAPACK and Elemental implementations of Householder QR and our implementation of CAQR on the Hopper Cray XE6 NERSC system. We also provide algorithmic improvements to the ScaLAPACK and CAQR algorithms.

BibTeX:
@inproceedings{BDGJ+14, author = {Ballard, Grey and Demmel, James and Grigori, Laura and Jacquelin, Mathias and Nguyen, Hong Diep and Solomonik, Edgar}, title = {Reconstructing {H}ouseholder Vectors from Tall-Skinny {QR}}, booktitle = {IEEE 28th International Parallel and Distributed Processing Symposium}, month = {May}, year = {2014}, pages = {1159--1170} }

BDHS14

Abstract: Algorithms have historically been evaluated in terms of the number of arithmetic operations they performed. This analysis is no longer sufficient for predicting running times on today's machines. Moving data through memory hierarchies and among processors requires much more time (and energy) than performing computations. Hardware trends suggest that the relative costs of this communication will only increase. Proving lower bounds on the communication of algorithms and finding algorithms that attain these bounds are therefore fundamental goals. We show that the communication cost of an algorithm is closely related to the graph expansion properties of its corresponding computation graph. Matrix multiplication is one of the most fundamental problems in scientific computing and in parallel computing. Applying expansion analysis to Strassen's and other fast matrix multiplication algorithms, we obtain the first lower bounds on their communication costs. These bounds show that the current sequential algorithms are optimal but that previous parallel algorithms communicate more than necessary. Our new parallelization of Strassen's algorithm is communication-optimal and outperforms all previous matrix multiplication algorithms.

BibTeX:
@article{BDHS14, author = {Ballard, Grey and Demmel, James and Holtz, Olga and Schwartz, Oded}, title = {Communication Costs of Strassen's Matrix Multiplication}, journal = {Communications of the ACM}, publisher = {ACM}, address = {New York, NY, USA}, month = {February}, year = {2014}, volume = {57}, number = {2}, pages = {107--114}, url = {http://doi.acm.org/10.1145/2556647.2556660} }

Ballard13

Abstract: Dense linear algebra computations are essential to nearly every problem in scientific computing and to countless other fields. Most matrix computations enjoy a high computational intensity (i.e., ratio of computation to data), and therefore the algorithms for the computations have a potential for high efficiency. However, performance for many linear algebra algorithms is limited by the cost of moving data between processors on a parallel computer or throughout the memory hierarchy of a single processor, which we will refer to generally as communication. Technological trends indicate that algorithmic performance will become even more limited by communication in the future. In this thesis, we consider the fundamental computations within dense linear algebra and address the following question: can we significantly improve the current algorithms for these computations, in terms of the communication they require and their performance in practice? To answer the question, we analyze algorithms on sequential and parallel architectural models that are simple enough to determine coarse communication costs but accurate enough to predict performance of implementations on real hardware. For most of the computations, we prove lower bounds on the communication that any algorithm must perform. If an algorithm exists with communication costs that match the lower bounds (at least in an asymptotic sense), we call the algorithm communication optimal. In many cases, the most commonly used algorithms are not communication optimal, and we can develop new algorithms that require less data movement and attain the communication lower bounds. In this thesis, we develop both new communication lower bounds and new algorithms, tightening (and in many cases closing) the gap between the best known lower bound and the best known algorithm (or upper bound). We consider both sequential and parallel algorithms, and we assess both classical and fast algorithms (e.g., Strassen's matrix multiplication algorithm). In particular, the central contributions of this thesis are: proving new communication lower bounds for nearly all classical direct linear algebra computations (dense or sparse), including factorizations for solving linear systems, least squares problems, and eigenvalue and singular value problems; proving new communication lower bounds for Strassen's and other fast matrix multiplication algorithms; proving new parallel communication lower bounds for classical and fast computations that set limits on an algorithm's ability to perfectly strong scale; summarizing the state-of-the-art in communication efficiency for both sequential and parallel algorithms for the computations to which the lower bounds apply; developing a new communication-optimal algorithm for computing a symmetric-indefinite factorization (observing speedups of up to 2.8x compared to alternative shared-memory parallel algorithms); developing new, more communication-efficient algorithms for reducing a symmetric band matrix to tridiagonal form via orthogonal similarity transformations (observing speedups of 2--6x compared to alternative sequential and parallel algorithms); and developing a new communication-optimal parallelization of Strassen's matrix multiplication algorithm (observing speedups of up to 2.84x compared to alternative distributed-memory parallel algorithms).

BibTeX:
@phdthesis{Ballard13, author = {Ballard, Grey}, title = {Avoiding Communication in Dense Linear Algebra}, school = {EECS Department, University of California, Berkeley}, month = {August}, year = {2013}, number = {UCB/EECS-2013-151}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-151.html} }

BBDD+13

Abstract: Factorization of a dense symmetric indefinite matrix is a key computational kernel in many scientific and engineering simulations. However, there is no scalable factorization algorithm that takes advantage of the symmetry and guarantees numerical stability through pivoting at the same time. This is because such an algorithm exhibits many of the fundamental challenges in parallel programming like irregular data accesses and irregular task dependencies. In this paper, we address these challenges in a tiled implementation of a blocked Aasen's algorithm using a dynamic scheduler. To fully exploit the limited parallelism in this left-looking algorithm, we study several performance enhancing techniques; e.g., parallel reduction to update a panel, tall-skinny LU factorization algorithms to factorize the panel, and a parallel implementation of symmetric pivoting. Our performance results on up to 48 AMD Opteron processors demonstrate that our implementation obtains speedups of up to 2.8 over MKL, while losing only one or two digits in the computed residual norms.

BibTeX:
@inproceedings{BBDD+13, author = {Ballard, G. and Becker, D. and Demmel, J. and Dongarra, J. and Druinsky, A. and Peled, I. and Schwartz, O. and Toledo, S. and Yamazaki, I.}, title = {Implementing a Blocked {Aasen's} Algorithm with a Dynamic Scheduler on Multicore Architectures}, booktitle = {Proceedings of the 27th IEEE International Parallel Distributed Processing Symposium}, month = {May}, year = {2013}, series = {IPDPS '13}, pages = {895--907}, url = {http://doi.ieeecomputersociety.org/10.1109/IPDPS.2013.98} }

BBDD+13-TR [newer version] [published version]

BibTeX:
@techreport{BBDD+13-TR, author = {Ballard, G. and Becker, D. and Demmel, J. and Dongarra, J. and Druinsky, A. and Peled, I. and Schwartz, O. and Toledo, S. and Yamazaki, I.}, title = {Communication-Avoiding Symmetric-Indefinite Factorization}, institution = {EECS Department, University of California, Berkeley}, month = {July}, year = {2013}, number = {UCB/EECS-2013-127}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-127.html} }

BBDG+13

Abstract: Parallel algorithms for sparse matrix-matrix multiplication typically spend most of their time on inter-processor communication rather than on computation, and hardware trends predict the relative cost of communication will only increase. Thus, sparse matrix multiplication algorithms must minimize communication costs in order to scale to large processor counts. In this paper, we consider multiplying sparse matrices corresponding to Erdos-Renyi random graphs on distributed-memory parallel machines. We prove a new lower bound on the expected communication cost for a wide class of algorithms. Our analysis of existing algorithms shows that, while some are optimal for a limited range of matrix density and number of processors, none is optimal in general. We obtain two new parallel algorithms and prove that they match the expected communication cost lower bound, and hence they are optimal.

BibTeX:
@inproceedings{BBDG+13, author = {Ballard, G. and Bulu\c{c}, A. and Demmel, J. and Grigori, L. and Lipshitz, B. and Schwartz, O. and Toledo, S.}, title = {Communication optimal parallel multiplication of sparse random matrices}, booktitle = {Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures}, publisher = {ACM}, month = {June}, year = {2013}, series = {SPAA '13}, pages = {222--231}, url = {http://doi.acm.org/10.1145/2486159.2486196} }

BDGJ+13-TR [published version]

Abstract: The Tall-Skinny QR (TSQR) algorithm is more communication efficient than the standard Householder algorithm for QR decomposition of matrices with many more rows than columns. However, TSQR produces a different representation of the orthogonal factor and therefore requires more software development to support the new representation. Further, implicitly applying the orthogonal factor to the trailing matrix in the context of factoring a square matrix is more complicated and costly than with the Householder representation. We show how to perform TSQR and then reconstruct the Householder vector representation with the same asymptotic communication efficiency and little extra computational cost. We demonstrate the high performance and numerical stability of this algorithm both theoretically and empirically. The new Householder reconstruction algorithm allows us to design more efficient parallel QR algorithms, with significantly lower latency cost compared to Householder QR and lower bandwidth and latency costs compared with the Communication-Avoiding QR (CAQR) algorithm. As a result, our final parallel QR algorithm outperforms ScaLAPACK and Elemental implementations of Householder QR and our implementation of CAQR on the Hopper Cray XE6 NERSC system.

BibTeX:
@techreport{BDGJ+13-TR, author = {Ballard, G. and Demmel, J. and Grigori, L. and Jacquelin, M. and Nguyen, H.D. and Solomonik, E.}, title = {Reconstructing {H}ouseholder Vectors from Tall-Skinny {QR}}, institution = {EECS Department, University of California, Berkeley}, month = {October}, year = {2013}, number = {UCB/EECS-2013-175}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-175.html} }

BDK13-TR [published version]

BibTeX:
@techreport{BDK13-TR, author = {Ballard, G. and Demmel, J. and Knight, N.}, title = {Avoiding Communication in Successive Band Reduction}, institution = {EECS Department, University of California, Berkeley}, month = {July}, year = {2013}, number = {UCB/EECS-2013-131}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2013/EECS-2013-131.html} }

BDLST13

Abstract: High performance for numerical linear algebra often comes at the expense of stability. Computing the LU decomposition of a matrix via Gaussian Elimination can be organized so that the computation involves regular and efficient data access. However, maintaining numerical stability via partial pivoting involves row interchanges that lead to inefficient data access patterns. To optimize communication efficiency throughout the memory hierarchy we confront two seemingly contradictory requirements: partial pivoting is efficient with column-major layout, whereas a block-recursive layout is optimal for the rest of the computation. We resolve this by introducing a shape morphing procedure that dynamically matches the layout to the computation throughout the algorithm, and show that Gaussian Elimination with partial pivoting can be performed in a communication efficient and cache-oblivious way. Our technique extends to QR decomposition, where computing Householder vectors prefers a different data layout than the rest of the computation.

BibTeX:
@inproceedings{BDLST13, author = {Ballard, G. and Demmel, J. and Lipshitz, B. and Schwartz, O. and Toledo, S.}, title = {Communication efficient {Gaussian} elimination with partial pivoting using a shape morphing data layout}, booktitle = {Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures}, publisher = {ACM}, month = {June}, year = {2013}, series = {SPAA '13}, pages = {232--240}, url = {http://doi.acm.org/10.1145/2486159.2486198} }

BDHLS12-CAPS

Abstract: Parallel matrix multiplication is one of the most studied fundamental problems in distributed and high performance computing. We obtain a new parallel algorithm that is based on Strassen's fast matrix multiplication and minimizes communication. The algorithm outperforms all known parallel matrix multiplication algorithms, classical and Strassen-based, both asymptotically and in practice. A critical bottleneck in parallelizing Strassen's algorithm is the communication between the processors. Ballard, Demmel, Holtz, and Schwartz (SPAA '11) prove lower bounds on these communication costs, using expansion properties of the underlying computation graph. Our algorithm matches these lower bounds, and so is communication-optimal. It exhibits perfect strong scaling within the maximum possible range. Benchmarking our implementation on a Cray XT4, we obtain speedups over classical and Strassen-based algorithms ranging from 24% to 184% for a fixed matrix dimension n=94080, where the number of processors ranges from 49 to 7203. Our parallelization approach generalizes to other fast matrix multiplication algorithms.

BibTeX:
@inproceedings{BDHLS12-CAPS, author = {G. Ballard and J. Demmel and O. Holtz and B. Lipshitz and O. Schwartz}, title = {Communication-optimal parallel algorithm for {S}trassen's matrix multiplication}, booktitle = {{Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures}}, publisher = {ACM}, address = {New York, NY, USA}, month = {June}, year = {2012}, series = {SPAA '12}, pages = {193--204}, url = {http://doi.acm.org/10.1145/2312005.2312044} }

BDHLS12-RECT

Abstract: Graph expansion analysis of computational DAGs is useful for obtaining communication cost lower bounds where previous methods, such as geometric embedding, are not applicable. This has recently been demonstrated for Strassen's and Strassen-like fast square matrix multiplication algorithms. Here we extend the expansion analysis approach to fast algorithms for rectangular matrix multiplication, obtaining a new class of communication cost lower bounds. These apply, for example, to the algorithms of Bini et al. (1979) and the algorithms of Hopcroft and Kerr (1971). Some of our bounds are proved to be optimal.

BibTeX:
@incollection{BDHLS12-RECT, author = {Ballard, G. and Demmel, J. and Holtz, O. and Lipshitz, B. and Schwartz, O.}, title = {Graph Expansion Analysis for Communication Costs of Fast Rectangular Matrix Multiplication}, booktitle = {Design and Analysis of Algorithms}, editor = {Even, G. and Rawitz, D.}, publisher = {Springer Berlin Heidelberg}, month = {December}, year = {2012}, series = {Lecture Notes in Computer Science}, volume = {7659}, pages = {13--36}, url = {http://dx.doi.org/10.1007/978-3-642-34862-4_2} }

BDHLS12-SS [technical report]

Abstract: A parallel algorithm has perfect strong scaling if its running time on $P$ processors is linear in $1/P$, including all communication costs. Distributed-memory parallel algorithms for matrix multiplication with perfect strong scaling have only recently been found. One is based on classical matrix multiplication (Solomonik and Demmel, 2011), and one is based on Strassen's fast matrix multiplication (Ballard, Demmel, Holtz, Lipshitz, and Schwartz, 2012). Both algorithms scale perfectly, but only up to some number of processors where the inter-processor communication no longer scales. We obtain a memory-independent communication cost lower bound on classical and Strassen-based distributed-memory matrix multiplication algorithms. These bounds imply that no classical or Strassen-based parallel matrix multiplication algorithm can strongly scale perfectly beyond the ranges already attained by the two parallel algorithms mentioned above. The memory-independent bounds and the strong scaling bounds generalize to other algorithms.
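A hedged summary of the bounds described in the abstract, in LaTeX (notation assumed here: $n$ the matrix dimension, $P$ the number of processors, $M$ the local memory per processor, and $\omega_0 = \lg 7$ for Strassen):

```latex
% Memory-independent per-processor bandwidth lower bounds:
\[
  W_{\text{classical}} = \Omega\!\left(\frac{n^2}{P^{2/3}}\right),
  \qquad
  W_{\text{Strassen-based}} = \Omega\!\left(\frac{n^2}{P^{2/\omega_0}}\right).
\]
% Perfect strong scaling is possible only while the memory-dependent
% bound dominates, i.e., for P = O(n^3 / M^{3/2}) in the classical case
% and P = O(n^{\omega_0} / M^{\omega_0/2}) in the Strassen-based case.
```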

BibTeX:
@inproceedings{BDHLS12-SS, author = {G. Ballard and J. Demmel and O. Holtz and B. Lipshitz and O. Schwartz}, title = {Brief announcement: strong scaling of matrix multiplication algorithms and memory-independent communication lower bounds}, booktitle = {{Proceedings of the 24th ACM Symposium on Parallelism in Algorithms and Architectures}}, publisher = {ACM}, address = {New York, NY, USA}, month = {June}, year = {2012}, series = {SPAA '12}, pages = {77--79}, url = {http://doi.acm.org/10.1145/2312005.2312021} }

BDHLS12-SS-TR [published version]

BibTeX:
@techreport{BDHLS12-SS-TR, author = {Ballard, G. and Demmel, J. and Holtz, O. and Lipshitz, B. and Schwartz, O.}, title = {Strong Scaling of Matrix Multiplication Algorithms and Memory-Independent Communication Lower Bounds}, institution = {EECS Department, University of California, Berkeley}, month = {March}, year = {2012}, number = {UCB/EECS-2012-31}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-31.html} }

BDHS12 [older version]

Abstract: The communication cost of algorithms (also known as I/O-complexity) is shown to be closely related to the expansion properties of the corresponding computation graphs. We demonstrate this on Strassen's and other fast matrix multiplication algorithms, and obtain the first lower bounds on their communication costs. In the sequential case, where the processor has a fast memory of size $M$, too small to store three $n$-by-$n$ matrices, the lower bound on the number of words moved between fast and slow memory is, for a large class of matrix multiplication algorithms, $\Omega\left((n/\sqrt{M})^{\omega_0} \cdot M\right)$, where $\omega_0$ is the exponent in the arithmetic count (e.g., $\omega_0 = \lg 7$ for Strassen, and $\omega_0 = 3$ for conventional matrix multiplication). With $p$ parallel processors, each with fast memory of size $M$, the lower bound is asymptotically lower by a factor of $p$. These bounds are attainable both for sequential and for parallel algorithms and hence optimal.

BibTeX:
@article{BDHS12, author = {G. Ballard and J. Demmel and O. Holtz and O. Schwartz}, title = {Graph expansion and communication costs of fast matrix multiplication}, journal = {Journal of the ACM}, publisher = {ACM}, address = {New York, NY, USA}, month = {December}, year = {2012}, volume = {59}, number = {6}, pages = {32:1--32:23}, url = {http://doi.acm.org/10.1145/2395116.2395121} }

BDHS12-FastLA

BibTeX:
@techreport{BDHS12-FastLA, author = {G. Ballard and J. Demmel and O. Holtz and O. Schwartz}, title = {Sequential Communication Bounds for Fast Linear Algebra}, institution = {UC Berkeley}, month = {March}, year = {2012}, number = {UCB/EECS-2012-36}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2012/EECS-2012-36.html} }

BDK12

Abstract: The running time of an algorithm depends on both arithmetic and communication (i.e., data movement) costs, and the relative costs of communication are growing over time. In this work, we present both theoretical and practical results for tridiagonalizing a symmetric band matrix: we present an algorithm that asymptotically reduces communication, and we show that it indeed performs well in practice. The tridiagonalization of a symmetric band matrix is a key kernel in solving the symmetric eigenvalue problem for both full and band matrices. In order to preserve sparsity, tridiagonalization routines use annihilate-and-chase procedures that previously have suffered from poor data locality. We improve data locality by reorganizing the computation, asymptotically reducing communication costs compared to existing algorithms. Our sequential implementation demonstrates that avoiding communication improves runtime even at the expense of extra arithmetic: we observe a 2x speedup over Intel MKL while doing 43% more floating point operations. Our parallel implementation targets shared-memory multicore platforms. It uses pipelined parallelism and a static scheduler while retaining the locality properties of the sequential algorithm. Due to lightweight synchronization and effective data reuse, we see 9.5x scaling over our serial code and up to 6x speedup over the PLASMA library, comparing parallel performance on a ten-core processor.

BibTeX:
@inproceedings{BDK12, author = {G. Ballard and J. Demmel and N. Knight}, title = {Communication avoiding successive band reduction}, booktitle = {Proceedings of the 17th ACM SIGPLAN symposium on Principles and Practice of Parallel Programming}, publisher = {ACM}, address = {New York, NY, USA}, month = {February}, year = {2012}, series = {PPoPP '12}, pages = {35--44}, url = {http://doi.acm.org/10.1145/2145816.2145822} }

LBDS12

Abstract: Matrix multiplication is a fundamental kernel of many high performance and scientific computing applications. Most parallel implementations use classical $O(n^3)$ matrix multiplication, even though there exist algorithms with lower arithmetic complexity. We recently presented a new Communication-Avoiding Parallel Strassen algorithm (CAPS), based on Strassen's fast matrix multiplication, that minimizes communication (SPAA '12). It communicates asymptotically less than all classical and all previous Strassen-based algorithms, and it attains theoretical lower bounds. In this paper we show that CAPS is also faster in practice. We benchmark and compare its performance to previous algorithms on Hopper (Cray XE6), Intrepid (IBM BG/P), and Franklin (Cray XT4). We demonstrate significant speedups over previous algorithms both for large matrices and for small matrices on large numbers of processors. We model and analyze the performance of CAPS and predict its performance on future exascale platforms.

BibTeX:
@inproceedings{LBDS12, author = {Lipshitz, B. and Ballard, G. and Demmel, J. and Schwartz, O.}, title = {Communication-avoiding parallel {S}trassen: {I}mplementation and performance}, booktitle = {Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis}, month = {November}, year = {2012}, series = {SC '12}, pages = {101:1--101:11}, url = {http://dl.acm.org/citation.cfm?id=2388996.2389133} }

ABDK11

Abstract: We describe an implementation of the Communication-Avoiding QR (CAQR) factorization that runs entirely on a single graphics processor (GPU). We show that the reduction in memory traffic provided by CAQR allows us to outperform existing parallel GPU implementations of QR for a large class of tall-skinny matrices. Other GPU implementations of QR handle panel factorizations by either sending the work to a general-purpose processor or using entirely bandwidth-bound operations, incurring data transfer overheads. In contrast, our QR is done entirely on the GPU using compute-bound kernels, meaning performance is good regardless of the width of the matrix. As a result, we outperform CULA, a parallel linear algebra library for GPUs, by up to 17x for tall-skinny matrices and Intel's Math Kernel Library (MKL) by up to 12x. We also discuss stationary video background subtraction as a motivating application. We apply a recent statistical approach, which requires many iterations of computing the singular value decomposition of a tall-skinny matrix. Using CAQR as a first step to getting the singular value decomposition, we are able to get the answer 3x faster than if we use a traditional bandwidth-bound GPU QR factorization tuned specifically for that matrix size, and 30x faster than if we use Intel's Math Kernel Library (MKL) singular value decomposition routine on a multicore CPU.

BibTeX:
@inproceedings{ABDK11, author = {Anderson, M. and Ballard, G. and Demmel, J. and Keutzer, K.}, title = {Communication-Avoiding {QR} Decomposition for {GPU}s}, booktitle = {Proceedings of the 2011 IEEE International Parallel \& Distributed Processing Symposium}, publisher = {IEEE Computer Society}, address = {Washington, DC, USA}, month = {May}, year = {2011}, series = {{IPDPS} '11}, pages = {48--58}, url = {http://dx.doi.org/10.1109/IPDPS.2011.15} }

BB11

Abstract: Motivated by a problem which arises in the analysis of stagnation point flow toward a stretching sheet, we consider a general class of problems of the form $y'''=f(y,y',y'')$, $y(0)=a$, $y'(0)=m$, $\lim_{t \rightarrow \infty} y'(t) = b$. We give conditions on $f$ which imply existence of at least one solution and obtain a partial uniqueness result.

BibTeX:
@article{BB11, author = {Baxley, J. and Ballard, G.}, title = {Existence of Solutions for a Class of Singular Nonlinear Third Order Autonomous Boundary Value Problems}, journal = {Communications in Applied Analysis}, month = {October}, year = {2011}, volume = {15}, number = {2, 3, and 4}, pages = {195--202}, url = {http://www.dynamicpublishers.com/CAA/caa2011.htm} }

BDD11

BibTeX:
@techreport{BDD11, author = {Ballard, G. and Demmel, J. and Dumitriu, I.}, title = {Communication-optimal parallel and sequential eigenvalue and singular value algorithms}, institution = {UC Berkeley}, month = {February}, year = {2011}, number = {EECS-2011-14}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-14.html} }

BDG11

Abstract: As the gap between the cost of communication (i.e., data movement) and computation continues to grow, the importance of pursuing algorithms which minimize communication also increases. Toward this end, we seek asymptotic communication lower bounds for general memory models and classes of algorithms. Recent work has established lower bounds for a wide set of linear algebra algorithms on a sequential machine and on a parallel machine with identical processors. This work extends these previous bounds to a heterogeneous model in which processors access data and perform floating point operations at differing speeds. We also present an algorithm for dense matrix multiplication which attains the lower bound.

BibTeX:
@inproceedings{BDG11, author = {Ballard, G. and Demmel, J. and Gearhart, A.}, title = {Brief announcement: communication bounds for heterogeneous architectures}, booktitle = {Proceedings of the 23rd ACM Symposium on Parallelism in Algorithms and Architectures}, publisher = {ACM}, month = {June}, year = {2011}, series = {SPAA '11}, pages = {257--258}, url = {http://doi.acm.org/10.1145/1989493.1989531} }

BDHS11-GLB

Abstract: In 1981 Hong and Kung proved a lower bound on the amount of communication (amount of data moved between a small, fast memory and large, slow memory) needed to perform dense, n-by-n matrix multiplication using the conventional $O(n^3)$ algorithm, where the input matrices were too large to fit in the small, fast memory. In 2004 Irony, Toledo, and Tiskin gave a new proof of this result and extended it to the parallel case (where communication means the amount of data moved between processors). In both cases the lower bound may be expressed as $\Omega(\#\text{arithmetic operations}/\sqrt{M})$, where $M$ is the size of the fast memory (or local memory in the parallel case). Here we generalize these results to a much wider variety of algorithms, including LU factorization, Cholesky factorization, $LDL^T$ factorization, QR factorization, the Gram--Schmidt algorithm, and algorithms for eigenvalues and singular values, i.e., essentially all direct methods of linear algebra. The proof works for dense or sparse matrices and for sequential or parallel algorithms. In addition to lower bounds on the amount of data moved (bandwidth cost), we get lower bounds on the number of messages required to move it (latency cost). We extend our lower bound technique to compositions of linear algebra operations (like computing powers of a matrix) to decide whether it is enough to call a sequence of simpler optimal algorithms (like matrix multiplication) to minimize communication, or whether we can do better. We give examples of both. We also show how to extend our lower bounds to certain graph-theoretic problems. We point out recently designed algorithms that attain many of these lower bounds.

BibTeX:
@article{BDHS11-GLB, author = {G. Ballard and J. Demmel and O. Holtz and O. Schwartz}, title = {Minimizing Communication in Numerical Linear Algebra}, journal = {SIAM Journal on Matrix Analysis and Applications}, publisher = {SIAM}, month = {September}, year = {2011}, volume = {32}, number = {3}, pages = {866--901}, url = {http://epubs.siam.org/doi/abs/10.1137/090769156} }

BDHS11-SLB [newer version]

Abstract: The communication cost of algorithms (also known as I/O-complexity) is shown to be closely related to the expansion properties of the corresponding computation graphs. We demonstrate this on Strassen's and other fast matrix multiplication algorithms, and obtain the first lower bounds on their communication costs. For sequential algorithms these bounds are attainable and so optimal.
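For comparison with the classical bound above (a note added here for context): the sequential bandwidth-cost lower bound this paper establishes for Strassen's algorithm is $\Omega\left((n/\sqrt{M})^{\log_2 7} \cdot M\right)$. Since $\log_2 7 \approx 2.81 < 3$, this is asymptotically smaller than the $\Omega(n^3/\sqrt{M})$ bound for the conventional algorithm, and the usual recursive implementation of Strassen attains it.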

BibTeX:
@inproceedings{BDHS11-SLB, author = {G. Ballard and J. Demmel and O. Holtz and O. Schwartz}, title = {Graph expansion and communication costs of fast matrix multiplication}, booktitle = {{Proceedings of the 23rd ACM Symposium on Parallelism in Algorithms and Architectures}}, publisher = {ACM}, month = {June}, year = {2011}, series = {SPAA '11}, pages = {1--12}, url = {http://doi.acm.org/10.1145/1989493.1989495} }

BDHS11-TR

BibTeX:
@techreport{BDHS11-TR, author = {G. Ballard and J. Demmel and O. Holtz and O. Schwartz}, title = {Minimizing communication in linear algebra}, institution = {UC Berkeley}, month = {February}, year = {2011}, number = {EECS-2011-15}, url = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-15.html} }

BKP11 (technical report)

Abstract: The tensor eigenproblem has many important applications, generating both mathematical and application-specific interest in the properties of tensor eigenpairs and methods for computing them. A tensor is an $m$-way array, generalizing the concept of a matrix (a 2-way array). Kolda and Mayo have recently introduced a generalization of the matrix power method for computing real-valued tensor eigenpairs of symmetric tensors. In this work, we present an efficient implementation of their algorithm, exploiting symmetry in order to save storage, data movement, and computation. For an application involving repeatedly solving the tensor eigenproblem for many small tensors, we describe how a GPU can be used to accelerate the computations. On an NVIDIA Tesla C2050 (Fermi) GPU, we achieve 318 Gflops/s (31% of theoretical peak performance in single precision) on our test data set.
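A minimal sketch of the Kolda--Mayo iteration (the shifted symmetric higher-order power method) for a symmetric 3-way tensor, written here in NumPy. This is my illustration of the method as I understand it, not the paper's GPU code; the shift alpha, random seed, and tolerance are illustrative choices:

    import numpy as np

    def ss_hopm(A, alpha=1.0, tol=1e-10, max_iter=500):
        """Shifted symmetric higher-order power method for a symmetric
        3-way tensor A: seeks (lam, x) with A x^2 = lam x and ||x|| = 1."""
        n = A.shape[0]
        x = np.random.default_rng(0).standard_normal(n)
        x /= np.linalg.norm(x)
        lam = np.einsum('ijk,i,j,k->', A, x, x, x)            # lam = A x^3
        for _ in range(max_iter):
            y = np.einsum('ijk,j,k->i', A, x, x) + alpha * x  # A x^2 plus shift
            x = y / np.linalg.norm(y)
            lam_new = np.einsum('ijk,i,j,k->', A, x, x, x)
            if abs(lam_new - lam) < tol:
                return lam_new, x
            lam = lam_new
        return lam, x

The application described in the abstract solves this eigenproblem independently for many small tensors, which is why batching the iterations on a GPU pays off.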

BibTeX:
@article{BKP11, author = {G. Ballard and T. Kolda and T. Plantenga}, title = {Efficiently Computing Tensor Eigenvalues on a {GPU}}, journal = {2011 IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, month = {May}, year = {2011}, volume = {0}, pages = {1340--1348}, url = {http://www.computer.org/portal/web/csdl/doi/10.1109/IPDPS.2011.287} }

BDHS10 (older version)

Abstract: Numerical algorithms have two kinds of costs: arithmetic and communication, by which we mean either moving data between levels of a memory hierarchy (in the sequential case) or over a network connecting processors (in the parallel case). Communication costs often dominate arithmetic costs, so it is of interest to design algorithms minimizing communication. In this paper we first extend known lower bounds on the communication cost (both for bandwidth and for latency) of conventional ($O(n^3)$) matrix multiplication to Cholesky factorization, which is used for solving dense symmetric positive definite linear systems. Second, we compare the costs of various Cholesky decomposition implementations to these lower bounds and identify the algorithms and data structures that attain them. In the sequential case, we consider both the two-level and hierarchical memory models. Combined with prior results in [J. Demmel et al., Communication-optimal Parallel and Sequential QR and LU Factorizations, Technical report EECS-2008-89, University of California, Berkeley, CA, 2008], [J. Demmel et al., Implementing Communication-optimal Parallel and Sequential QR and LU Factorizations, SIAM J. Sci. Comp., submitted], and [J. Demmel, L. Grigori, and H. Xiang, Communication-avoiding Gaussian Elimination, Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, 2008], this gives a set of communication-optimal algorithms for $O(n^3)$ implementations of the three basic factorizations of dense linear algebra: LU with pivoting, QR, and Cholesky. But it goes beyond this prior work on sequential LU by optimizing communication for any number of levels of memory hierarchy.

BibTeX:
@article{BDHS10, author = {G. Ballard and J. Demmel and O. Holtz and O. Schwartz}, title = {Communication-optimal Parallel and Sequential {C}holesky Decomposition}, journal = {SIAM Journal on Scientific Computing}, publisher = {SIAM}, month = {December}, year = {2010}, volume = {32}, number = {6}, pages = {3495--3523}, url = {http://epubs.siam.org/doi/abs/10.1137/090760969} }

BKP10-TR (published version)

BibTeX:
@techreport{BKP10-TR, author = {Ballard, G. and Kolda, T. and Plantenga, T.}, title = {Efficiently Computing Tensor Eigenvalues on a {GPU}}, institution = {Sandia National Laboratories}, month = {December}, year = {2010}, url = {http://csri.sandia.gov/Proceedings/} }

BB09

Abstract: We study the Friedrichs extension for a class of $2n$th order ordinary differential operators. These self-adjoint operators have compact inverses and the central problem is to describe their domains in terms of boundary conditions.

BibTeX:
@article{BB09, author = {Ballard, G. and Baxley, J.}, title = {The Friedrichs extension of certain singular differential operators, II}, journal = {Electronic Journal of Qualitative Theory of Differential Equations}, month = {Oct}, year = {2009}, volume = {Special Edition I}, number = {5}, pages = {1--11}, url = {http://www.emis.ams.org/journals/EJQTDE/sped1/105.html} }

BDHS09 (newer version)

Abstract: Numerical algorithms have two kinds of costs: arithmetic and communication, by which we mean either moving data between levels of a memory hierarchy (in the sequential case) or over a network connecting processors (in the parallel case). Communication costs often dominate arithmetic costs, so it is of interest to design algorithms minimizing communication. In this paper we first extend known lower bounds on the communication cost (both for bandwidth and for latency) of conventional ($O(n^3)$) matrix multiplication to Cholesky, which is used for solving dense symmetric positive definite linear systems. Second, we compare the cost of various Cholesky implementations to this lower bound, and draw the following conclusions: (1) "Naïve" sequential algorithms for Cholesky attain neither the bandwidth nor latency lower bounds. (2) The sequential blocked algorithm in LAPACK (with the right block size), as well as various recursive algorithms [AP00, GJ01, AGW01, ST04], and one based on work of Toledo [Tol97], can attain the bandwidth lower bound. (3) The LAPACK algorithm can also attain the latency bound if used with blocked data structures rather than column-wise or row-wise matrix data structures, though the Toledo algorithm cannot. (4) The recursive sequential algorithm due to [AP00] attains the bandwidth and latency lower bounds at every level of a multi-level memory hierarchy, in a "cache-oblivious" way. (5) The parallel implementation of Cholesky in the ScaLAPACK library (again with the right block-size) attains both the bandwidth and latency lower bounds to within a poly-logarithmic factor. Combined with prior results in [DGHL08a, DGHL08b, DGX08] this gives a complete set of communication-optimal algorithms for $O(n^3)$ implementations of three basic factorizations of dense linear algebra: LU with pivoting, QR, and Cholesky. But it goes beyond this prior work on sequential LU and QR by optimizing communication for any number of levels of memory hierarchy.
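To illustrate conclusion (2), here is a schematic blocked (right-looking) Cholesky in NumPy. Choosing the block size $b$ on the order of $\sqrt{M}$, so that a few $b \times b$ blocks fit in fast memory of size $M$, is what "the right block size" refers to. This is my sketch of the textbook algorithm, not LAPACK's code; in particular it calls a general solver where a triangular-solve kernel would be used in practice:

    import numpy as np

    def blocked_cholesky(A, b):
        """Lower-triangular Cholesky factor of SPD matrix A, block size b.
        Each step factors a b-by-b diagonal block, solves for the panel
        below it, and applies a symmetric rank-b update to the trailing
        matrix (schematic; triangularity of the solve is not exploited)."""
        A = A.copy()
        n = A.shape[0]
        for k in range(0, n, b):
            e = min(k + b, n)
            A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])  # diagonal block L11
            if e < n:
                # panel: L21 = A21 * L11^{-T}
                A[e:n, k:e] = np.linalg.solve(A[k:e, k:e], A[e:n, k:e].T).T
                # trailing update: A22 -= L21 * L21^T
                A[e:n, e:n] -= A[e:n, k:e] @ A[e:n, k:e].T
        return np.tril(A)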

BibTeX:
@inproceedings{BDHS09, author = {Ballard, G. and Demmel, J. and Holtz, O. and Schwartz, O.}, title = {Communication-optimal parallel and sequential {C}holesky decomposition}, booktitle = {{Proceedings of the 22nd Symposium on Parallelism in Algorithms and Architectures}}, publisher = {ACM}, month = {June}, year = {2009}, series = {SPAA '09}, pages = {245--252}, url = {http://doi.acm.org/10.1145/1583991.1584054} }

Ballard08

BibTeX:
@mastersthesis{Ballard08, author = {Ballard, G.}, title = {Asymptotic behavior of the eigenvalues of Toeplitz integral operators associated with the Hankel transform}, school = {Wake Forest University}, month = {May}, year = {2008}, url = {http://wakespace.lib.wfu.edu/handle/10339/14676} }

BB08

Abstract: We consider boundary value problems of the form $y''=-f(t,y)$, $y(0)=0$, $y(1)=0$, motivated by examples where $f(t,y)=\phi(t)g(y)$ and $g(y)$ behaves like $y^{-\lambda}$ ($\lambda>0$) as $y \rightarrow 0^+$. We explore conditions under which such problems have multiple positive solutions, investigate qualitative behavior of these solutions, and discuss computational methods for approximating the solutions.

BibTeX:
@article{BB08, author = {Ballard, G. and Baxley, J.}, title = {Qualitative behavior and computation of multiple solutions of singular nonlinear boundary value problems}, journal = {Involve}, month = {Feb}, year = {2008}, volume = {1}, number = {1}, pages = {21--31}, url = {http://msp.org/involve/2008/1-1/p03.xhtml} }

Ballard06

Abstract: In an interdisciplinary effort to model protein dependency networks, biologists measure signals from certain proteins within cells over a given interval of time. Using this time series data, the goal is to deduce protein dependency relationships. The mathematical challenge is to statistically measure correlations between given proteins over time in order to conjecture probable relationships. Biologists can then consider these relationships with more scrutiny, in order to confirm their conjectures. One algorithm for finding such relationships makes use of interpolation of the data to produce next-state functions for each protein and the Deegan-Packel Index of Power voting method to measure the strength of correlations between pairs of proteins. The algorithm was previously implemented, but limitations associated with the original language required the algorithm to be re-implemented in a more computationally efficient language. Because of the algebraic focus of the Computational Commutative Algebra language, or CoCoA, the algorithm was re-implemented in this language, and results have been produced much more efficiently. In this paper I discuss the algorithm, the CoCoA language, the implementation of the algorithm in CoCoA, and the quality of the results.
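The Deegan-Packel index mentioned above has a short combinatorial definition: every minimal winning coalition splits one unit of credit equally among its members, and a player's index is the average of its credit over all minimal winning coalitions. A small sketch of that computation (mine, not the paper's CoCoA implementation):

    from collections import defaultdict

    def deegan_packel(minimal_winning):
        """Deegan-Packel power index, given the list of minimal winning
        coalitions (each coalition a set of player labels)."""
        credit = defaultdict(float)
        for S in minimal_winning:
            for p in S:
                credit[p] += 1.0 / len(S)
        m = len(minimal_winning)
        return {p: v / m for p, v in credit.items()}

    # Example: three players whose minimal winning coalitions are the pairs;
    # by symmetry each player's index is 1/3.
    print(deegan_packel([{1, 2}, {1, 3}, {2, 3}]))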

BibTeX:
@article{Ballard06, author = {Ballard, G.}, title = {Modeling Protein Dependency Networks Using CoCoA}, journal = {Crossroads}, publisher = {ACM}, address = {New York, NY, USA}, month = {September}, year = {2006}, volume = {13}, number = {1}, pages = {6--6}, url = {http://doi.acm.org/10.1145/1217666.1217672} }

BBN06

Abstract: We consider nonlinear boundary value problems with multiple solutions. A method is proposed for the computation of such solutions which depends crucially on known a priori qualitative information about the behavior of the solutions. The method is a two-stage method: the second stage is a shooting method, and the initial values of the shooting parameters are found in a first stage that approximates the boundary value problem with a discrete approximation. Both nonsingular and singular problems are considered.
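A bare-bones sketch of the second (shooting) stage for a problem of the form $y''=-f(t,y)$, $y(0)=0$, $y(1)=0$: shoot on the unknown initial slope $s=y'(0)$ and solve $y(1;s)=0$ for $s$. This is my illustration using SciPy with a hypothetical smooth $f$; the paper's crucial first stage, which uses a discrete approximation and a priori qualitative information to supply a starting $s$ near each of the multiple solutions, is omitted here:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    def shoot(f, s):
        """Integrate y'' = -f(t, y) with y(0) = 0, y'(0) = s; return y(1)."""
        sol = solve_ivp(lambda t, u: [u[1], -f(t, u[0])], (0.0, 1.0),
                        [0.0, s], rtol=1e-10, atol=1e-12)
        return sol.y[0, -1]

    # Hypothetical nonsingular example: f(t, y) = y + 1.
    f = lambda t, y: y + 1.0
    s_star = brentq(lambda s: shoot(f, s), -10.0, 10.0)  # slope giving y(1) = 0

For singular problems like those treated in the paper, both the integrator and the bracketing of $s$ need more care than this sketch shows.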

BibTeX:
@article{BBN06, author = {Ballard, G. and Baxley, J. and Libbus, N.}, title = {Qualitative behavior and computation of multiple solutions of nonlinear boundary value problems}, journal = {Communications on Pure and Applied Analysis}, month = {Mar}, year = {2006}, volume = {5}, number = {2}, pages = {251--259}, url = {http://www.aimsciences.org/journals/displayArticles.jsp?paperID=1769} }

Created by JabRef on 18/03/2016.