Publications

  • The Tensor Algebra Compiler  Fredrik Kjolstad, Shoaib Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe
    Abstract
    Tensor algebra is a powerful tool with applications in machine learning, data analytics, engineering and the physical sciences. Tensors are often sparse, and compound operations must frequently be computed in a single kernel for performance and to save memory. Programmers are left to write kernels for every operation of interest, with different mixes of dense and sparse tensors in different formats. The combinations are infinite, which makes it impossible to manually implement and optimize them all. This paper introduces the first compiler technique to automatically generate kernels for any compound tensor algebra operation on dense and sparse tensors. The technique is implemented in a C++ library called taco. Its performance is competitive with best-in-class hand-optimized kernels in popular libraries, while supporting far more tensor operations.
    BibTeX
    @article{Kjolstad:2017:TAC:3152284.3133901,
     author = {Kjolstad, Fredrik and Kamil, Shoaib and Chou, Stephen and Lugato, David and Amarasinghe, Saman},
     title = {The Tensor Algebra Compiler},
     journal = {Proc. ACM Program. Lang.},
     issue_date = {October 2017},
     volume = {1},
     number = {OOPSLA},
     month = oct,
     year = {2017},
     issn = {2475-1421},
     pages = {77:1--77:29},
     articleno = {77},
     numpages = {29},
     url = {http://doi.acm.org/10.1145/3133901},
     doi = {10.1145/3133901},
     acmid = {3133901},
     publisher = {ACM},
     address = {New York, NY, USA},
     keywords = {code generation, iteration graphs, linear algebra, merge lattices, parallelism, performance, sparse data structures, tensor algebra, tensors}
    }
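    Example
    A minimal sketch of how a compound operation like the ones described above is expressed with taco's C++ API. It follows the taco documentation; the dimensions, inserted values, and variable names are illustrative, and exact signatures may differ between taco versions.
     #include "taco.h"
     using namespace taco;

     int main() {
       // Per-dimension formats: a CSR matrix (dense rows, compressed columns)
       // and dense vectors.
       Format csr({Dense, Sparse});
       Format dv({Dense});

       Tensor<double> B("B", {1000, 1000}, csr);
       Tensor<double> c("c", {1000}, dv);
       Tensor<double> d("d", {1000}, dv);
       Tensor<double> a("a", {1000}, dv);

       // Populate the inputs (illustrative values), then pack them into
       // their declared formats.
       B.insert({0, 1}, 2.0);
       c.insert({1}, 3.0);
       d.insert({0}, 4.0);
       B.pack(); c.pack(); d.pack();

       // One compound expression; taco generates a single fused kernel for
       // it instead of materializing B*c before the addition.
       IndexVar i, j;
       a(i) = B(i, j) * c(j) + d(i);

       a.compile();   // generate and compile the kernel
       a.assemble();  // allocate the output's index structure
       a.compute();   // run the generated kernel
       return 0;
     }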
  • Unified Sparse Formats for Tensor Algebra Compilers  Stephen Chou, Fredrik Kjolstad, and Saman Amarasinghe
    Abstract
    This paper shows how to build a sparse tensor algebra compiler that is agnostic to tensor formats (data layouts). We develop an interface that describes formats in terms of their capabilities and properties, and show how to build a modular code generator where new formats can be added as plugins. We then describe six implementations of the interface that compose to form the dense, CSR/CSF, COO, DIA, ELL, and HASH tensor formats and countless variants thereof. With these implementations at hand, our code generator can generate code for any tensor algebra expression on any combination of the aforementioned formats.

    To demonstrate our modular code generator design, we have implemented it in the open-source taco tensor algebra compiler. Our evaluation shows that we get better performance by supporting more formats specialized to different tensor structures, and our plugins make it easy to add new formats. For example, when data is provided in the COO format, computing a single matrix-vector multiplication with COO is up to 3.6x faster than with CSR. Furthermore, DIA is specialized to tensor convolutions and stencil operations and therefore performs up to 22% faster than CSR for such operations. To further demonstrate the importance of support for many formats, we show that the best vector format for matrix-vector multiplication varies with input sparsities, from hash maps to sparse vectors to dense vectors. Finally, we show that the performance of generated code for these formats is competitive with hand-optimized implementations.
    BibTeX
    @article{chou:2018:fmtaco,
     author = {Chou, Stephen and Kjolstad, Fredrik and Amarasinghe, Saman},
     title = {Unified Sparse Formats for Tensor Algebra Compilers},
     journal = {ArXiv e-prints},
     archivePrefix = {arXiv},
     eprint = {1804.10112},
     primaryClass = {cs.MS},
     keywords = {Computer Science - Mathematical Software, Computer Science - Programming Languages},
     year = {2018},
     month = apr,
     url = {https://arxiv.org/abs/1804.10112}
    }
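    Example
    A sketch of the format-agnostic design described above, again using the taco C++ API: the computation is written once, and only the format declarations change. CSR and CSC below are composed from the same two level formats with different mode orderings; COO, DIA, ELL, and HASH compose analogously from the other level formats described in the paper. Dimensions and values are illustrative, and exact signatures may differ between taco versions.
     #include "taco.h"
     using namespace taco;

     int main() {
       // The same per-dimension building blocks compose different layouts:
       // CSR stores dense rows over compressed columns; CSC uses the same
       // level formats with the mode (dimension) order reversed.
       Format csr({Dense, Sparse});
       Format csc({Dense, Sparse}, {1, 0});
       Format dv({Dense});

       // Swap csc for csr (or any other format) without touching the
       // expression below; the generated code adapts to the declaration.
       Tensor<double> A("A", {1000, 1000}, csc);
       Tensor<double> x("x", {1000}, dv);
       Tensor<double> y("y", {1000}, dv);

       A.insert({0, 1}, 2.0);
       x.insert({1}, 3.0);
       A.pack(); x.pack();

       IndexVar i, j;
       y(i) = A(i, j) * x(j);   // sparse matrix-vector multiplication

       y.compile();
       y.assemble();
       y.compute();
       return 0;
     }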
  • Sparse Tensor Algebra Optimizations with Workspaces  Fredrik Kjolstad, Peter Ahrens, Shoaib Kamil, and Saman Amarasinghe
    Abstract
    This paper shows how to optimize sparse tensor algebraic expressions by introducing temporary tensors, called workspaces, into the resulting loop nests. We develop a new intermediate language for tensor operations called concrete index notation that extends tensor index notation. Concrete index notation expresses when and where sub-computations occur and what tensor they are stored into. We then describe the workspace optimization in this language, and how to compile it to sparse code by building on prior work in the literature.

    We demonstrate the importance of the optimization on several important sparse tensor kernels, including sparse matrix-matrix multiplication (SpMM), sparse tensor addition (SpAdd), and the matricized tensor times Khatri-Rao product (MTTKRP) used to factorize tensors. Our results show improvements over prior work on tensor algebra compilation and bring the performance of these kernels on par with state-of-the-art hand-optimized implementations. For example, SpMM was not supported by prior tensor algebra compilers, the performance of MTTKRP on the nell-2 data set improves by 35%, and MTTKRP can, for the first time, produce sparse results.
    BibTeX
    @article{kjolstad:2018:workspaces,
     author = {Kjolstad, Fredrik and Ahrens, Peter and Kamil, Shoaib and Amarasinghe, Saman},
     title = {Sparse Tensor Algebra Optimizations with Workspaces},
     journal = {ArXiv e-prints},
     archivePrefix = {arXiv},
     eprint = {1802.10574},
     primaryClass = {cs.MS},
     keywords = {Computer Science - Mathematical Software, Computer Science - Programming Languages},
     year = {2018},
     month = apr,
     url = {https://arxiv.org/abs/1802.10574}
    }
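    Example
    To make the workspace idea concrete, here is a hand-written C++ sketch of the kind of SpMM kernel the optimization yields (not taco's generated code verbatim). Each row of the sparse result is first accumulated into a dense temporary and then compressed into the output; the CSR struct and all names are illustrative.
     #include <algorithm>
     #include <vector>

     // Minimal CSR representation.
     struct CSR {
       int rows, cols;
       std::vector<int> pos;      // row pointers, size rows + 1
       std::vector<int> crd;      // column indices of the nonzeros
       std::vector<double> val;   // nonzero values
     };

     // A = B * C with all three matrices in CSR. The dense workspace w holds
     // one row of the result while it is accumulated, and is then compressed
     // into A's sparse data structure.
     CSR spmm(const CSR& B, const CSR& C) {
       CSR A{B.rows, C.cols, {0}, {}, {}};
       std::vector<double> w(C.cols, 0.0);   // dense workspace
       std::vector<bool> seen(C.cols, false);
       std::vector<int> nz;                  // nonzero pattern of current row

       for (int i = 0; i < B.rows; i++) {
         nz.clear();
         // Scatter: add scaled rows of C into the workspace.
         for (int pB = B.pos[i]; pB < B.pos[i + 1]; pB++) {
           int k = B.crd[pB];
           for (int pC = C.pos[k]; pC < C.pos[k + 1]; pC++) {
             int j = C.crd[pC];
             if (!seen[j]) { seen[j] = true; nz.push_back(j); }
             w[j] += B.val[pB] * C.val[pC];
           }
         }
         // Gather: compress the workspace into row i of the sparse result.
         std::sort(nz.begin(), nz.end());
         for (int j : nz) {
           A.crd.push_back(j);
           A.val.push_back(w[j]);
           w[j] = 0.0;                       // reset for the next row
           seen[j] = false;
         }
         A.pos.push_back(static_cast<int>(A.crd.size()));
       }
       return A;
     }
    The dense workspace supports the random-access scatter that a compressed output cannot, which is why the optimization enables kernels such as SpMM and, as noted above, sparse MTTKRP results.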