
Sparse general matrix-matrix multiplication (SpGEMM) on GPUs is challenging due to the varying sparsity patterns of sparse matrices. Existing solutions achieve good performance for certain types of matrices but fail to accelerate all kinds of matrices equally well. Our approach combines multiple strategies with dynamic parameter selection to choose and tune the best-fitting algorithm for each row of the matrix. This choice is supported by a lightweight, multi-level matrix analysis, which carefully balances analysis cost against expected performance gains. Our evaluation on thousands of matrices with various characteristics shows that we outperform all existing solutions in 85% of the cases and achieve the second-best performance in another 12%. Over the entire data set, our solution is on average twice as fast as the second-best approach and up to 25× faster than other state-of-the-art GPU implementations. Using our approach, applications can expect the best performance independent of the matrices they work on.
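To make the per-row strategy selection concrete, below is a minimal CUDA sketch of the general idea: estimate an upper bound on the intermediate products for each row of C = A·B from the CSR row pointers, then bucket rows into kernel variants. The kernel name, the strategy enum, and the thresholds are illustrative assumptions, not the paper's actual implementation.

#include <cstdint>

enum RowStrategy : uint8_t { DENSE_ACC = 0, HASH_SHARED = 1, HASH_GLOBAL = 2 };

// Classify each row of C = A * B by an upper bound on its intermediate
// products so a suitable kernel variant can be chosen per row.
// (Illustrative sketch; not the paper's implementation.)
__global__ void classify_rows(const int* __restrict__ a_row_ptr,
                              const int* __restrict__ a_col_idx,
                              const int* __restrict__ b_row_ptr,
                              int num_rows_a,
                              uint8_t* __restrict__ row_strategy)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= num_rows_a) return;

    // Upper bound: every nonzero A(row, k) contributes nnz(B(k, :))
    // intermediate products to row 'row' of C.
    long long products = 0;
    for (int i = a_row_ptr[row]; i < a_row_ptr[row + 1]; ++i) {
        int k = a_col_idx[i];
        products += b_row_ptr[k + 1] - b_row_ptr[k];
    }

    // Assumed thresholds; real values would be tuned per GPU and matrix.
    if (products <= 256)
        row_strategy[row] = DENSE_ACC;    // tiny rows: register/shared accumulation
    else if (products <= 4096)
        row_strategy[row] = HASH_SHARED;  // medium rows: shared-memory hash table
    else
        row_strategy[row] = HASH_GLOBAL;  // heavy rows: global-memory hash table
}

A host-side pass would then group rows by strategy and launch a tuned kernel variant per bucket; the analysis itself is a single linear pass over A's nonzeros, which keeps its cost low relative to the multiplication.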

Wed 26 Feb
Times are displayed in time zone: (GMT-07:00) Tijuana, Baja California

11:20 - 12:35: Main Conference - Matrix Multiplication and Approximation (Mediterranean Ballroom)
Chair(s): Albert Cohen (Google)
11:20 - 11:45
Talk: spECK: Accelerating GPU Sparse Matrix-Matrix Multiplication through Lightweight Analysis
Mathias Parger (Graz University of Technology), Martin Winter (Graz University of Technology, Austria), Daniel Mlakar (Graz University of Technology, Austria), Markus Steinberger (Graz University of Technology, Austria)
11:45 - 12:10
Talk
Peng Jiang (The University of Iowa), Changwan Hong (The Ohio State University), Gagan Agrawal (The Ohio State University)
12:10 - 12:35
Talk
Bangtian Liu (University of Toronto), Kazem Cheshmi (University of Toronto), Saeed Soori (University of Toronto), Michelle Strout (University of Arizona), Maryam Mehri Dehnavi (University of Toronto)