
A Source-to-Source Format and Schedule Auto-Tuning Framework for Sparse Tensor Program

EasyChair Preprint 13973

4 pages
Date: July 15, 2024

Abstract

Sparse tensor compilers simplify the development and optimization of operators through high-level abstract representations and auto-tuning. However, existing work still relies on compilation and hardware expertise to specify the design space for tuning, and its search strategies are limited, which introduces unavoidable cost and efficiency issues. In this paper, we propose a source-to-source auto-tuning framework that targets both the sparse format and the schedule of sparse tensor programs. The framework extracts sparse tensor computation patterns from the computational graph to automatically generate the design space, and employs an adaptive exploration scheme based on reinforcement learning and heuristic algorithms to find the optimal format and schedule configuration within that space. Preliminary experiments show significant performance gains compared to state-of-the-art high-performance operator libraries, manual optimization schemes, and existing auto-tuning frameworks.
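To make the tuning loop described above concrete, the following Python snippet is a minimal sketch, not the authors' implementation: the format list, the schedule knobs, the measure() stub, and the epsilon-greedy policy (standing in for the paper's reinforcement-learning and heuristic scheme) are all illustrative assumptions.

import itertools
import random

# Hypothetical design space: candidate sparse storage formats and
# schedule knobs. In the paper's framework these are derived
# automatically from the computational graph; the values here are
# assumptions made for the example.
FORMATS = ["CSR", "COO", "BSR", "ELL"]
SCHEDULE_KNOBS = {
    "tile_size":    [32, 64, 128],
    "parallelize":  [True, False],
    "vector_width": [4, 8],
}

def design_space():
    """Enumerate every (format, schedule) configuration."""
    keys = list(SCHEDULE_KNOBS)
    for fmt in FORMATS:
        for values in itertools.product(*(SCHEDULE_KNOBS[k] for k in keys)):
            yield fmt, dict(zip(keys, values))

def config_key(config):
    """Hashable key for a configuration."""
    fmt, sched = config
    return (fmt, tuple(sorted(sched.items())))

def measure(config):
    """Stand-in for generating source code for the configuration,
    compiling it, and timing it on the target hardware.
    Returns a cost (lower is better)."""
    rng = random.Random(hash(config_key(config)))  # deterministic stub
    return rng.uniform(0.1, 1.0)

def epsilon_greedy_search(budget=50, epsilon=0.2):
    """Toy adaptive exploration: mostly exploit the best configuration
    seen so far by mutating one schedule knob, occasionally explore a
    random point in the design space."""
    space = list(design_space())
    best, best_cost = None, float("inf")
    for _ in range(budget):
        if best is None or random.random() < epsilon:
            cand = random.choice(space)            # explore
        else:
            fmt, sched = best
            knob = random.choice(list(sched))      # exploit: mutate one knob
            sched = dict(sched, **{knob: random.choice(SCHEDULE_KNOBS[knob])})
            cand = (fmt, sched)
        cost = measure(cand)
        if cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

if __name__ == "__main__":
    config, cost = epsilon_greedy_search()
    print("best configuration:", config, "cost:", cost)

In practice the measurement step dominates tuning time, which is why the abstract emphasizes an adaptive search rather than exhaustive enumeration of the design space.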

Keyphrases: code generation and optimization, sparse tensor compiler, sparse computation, auto-tuning

BibTeX entry
@booklet{EasyChair:13973,
  author       = {Xiangjun Qu and Lei Gong and Wenqi Lou and Chao Wang and Xuehai Zhou},
  title        = {A Source-to-Source Format and Schedule Auto-Tuning Framework for Sparse Tensor Program},
  howpublished = {EasyChair Preprint 13973},
  year         = {2024}
}