5 Hits in 1.7 sec

NeuroVectorizer: End-to-End Vectorization with Deep Reinforcement Learning [article]

Ameer Haj-Ali, Nesreen K. Ahmed, Ted Willke, Sophia Shao, Krste Asanovic, Ion Stoica
2020 arXiv   pre-print
In this work, we explore a novel approach for handling loop vectorization and propose an end-to-end solution using deep reinforcement learning (RL).  ...  We develop an end-to-end framework, from code to vectorization, that integrates deep RL in the LLVM compiler. Our proposed framework takes benchmark codes as input and extracts the loop codes.  ...  Acknowledgments The authors would like to thank Ronny Ronen, Ayal Zaks, Gadi Haber, Hideki Saito, Pankaj Chawla, Andrew Kaylor and anonymous reviewers for their insightful feedback and suggestions.  ... 
arXiv:1909.13639v4 fatcat:pl3mcmsmxbaizgimjxekt5qstu
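A minimal sketch of the idea the NeuroVectorizer abstract describes: treat the choice of vectorization and interleaving pragmas for a loop as an RL action and reward the measured speedup over the scalar baseline. The `compile_and_time` harness, the pragma values, and the epsilon-greedy loop below are illustrative assumptions only; the paper itself uses deep RL over learned code embeddings integrated in LLVM.

```python
# Hypothetical sketch: vectorization/interleaving factors as an RL action space.
import random

VF_CHOICES = [1, 2, 4, 8, 16]   # candidate vectorization factors (assumed)
IF_CHOICES = [1, 2, 4, 8]       # candidate interleaving factors (assumed)
ACTIONS = [(vf, itf) for vf in VF_CHOICES for itf in IF_CHOICES]

def compile_and_time(loop_src: str, vf: int, itf: int) -> float:
    # Assumed harness: a real setup would inject
    # '#pragma clang loop vectorize_width(VF) interleave_count(IF)',
    # compile, run, and return the measured runtime.
    # A synthetic stand-in is used here so the sketch runs end to end.
    return 1.0 / (1.0 + 0.2 * min(vf, 8) + 0.05 * itf)

def choose_pragmas(loop_src: str, episodes: int = 50, eps: float = 0.1):
    baseline = compile_and_time(loop_src, 1, 1)       # scalar baseline
    value = {a: 0.0 for a in ACTIONS}                 # running reward estimate per action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy exploration over the (VF, IF) action space
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: value[a])
        reward = baseline / compile_and_time(loop_src, *action)   # speedup as reward
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]
    return max(ACTIONS, key=lambda a: value[a])

print(choose_pragmas("for (i = 0; i < n; i++) a[i] += b[i];"))
```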

AutoPhase: Juggling HLS Phase Orderings in Random Forests with Deep Reinforcement Learning [article]

Qijing Huang, Ameer Haj-Ali, William Moses, John Xiang, Ion Stoica, Krste Asanovic, John Wawrzynek
2020 arXiv   pre-print
To this end, we implement AutoPhase: a framework that takes a program and uses deep reinforcement learning to find a sequence of compilation passes that minimizes its execution time.  ...  In this paper, we evaluate a new technique to address the phase-ordering problem: deep reinforcement learning.  ...  To this end, we aim to leverage recent advancements in deep reinforcement learning (RL) (Sutton & Barto, 1998; Haj-Ali et al., 2019b) to address the phase ordering problem.  ... 
arXiv:2003.00671v2 fatcat:xemglojhkfhllo7oeo4aosqala
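The abstract above frames phase ordering as a sequential decision problem. A minimal sketch of that MDP, assuming state = program feature vector, action = index of the next pass to append, and reward = reduction in estimated cycle count, might look like the following. The pass list and the `featurize` / `cycle_count` hooks are illustrative placeholders, not AutoPhase's actual interface.

```python
# Hypothetical phase-ordering MDP in the spirit of the AutoPhase abstract.
from typing import Callable, List, Sequence, Tuple

class PhaseOrderEnv:
    """State: features of the current program; action: next pass to append;
    reward: drop in estimated cycle count after applying that pass.
    `featurize(ir, passes)` and `cycle_count(ir, passes)` are caller-supplied
    stand-ins for real compiler / HLS tooling."""

    def __init__(self, program_ir: str, passes: Sequence[str],
                 featurize: Callable[[str, List[str]], List[float]],
                 cycle_count: Callable[[str, List[str]], float],
                 horizon: int = 12):
        self.ir, self.passes = program_ir, list(passes)
        self.featurize, self.cycle_count, self.horizon = featurize, cycle_count, horizon

    def reset(self) -> List[float]:
        self.sequence: List[str] = []
        self.prev_cycles = self.cycle_count(self.ir, self.sequence)
        return self.featurize(self.ir, self.sequence)

    def step(self, action: int) -> Tuple[List[float], float, bool]:
        self.sequence.append(self.passes[action])
        cycles = self.cycle_count(self.ir, self.sequence)
        reward = self.prev_cycles - cycles        # positive when the pass helped
        self.prev_cycles = cycles
        done = len(self.sequence) >= self.horizon
        return self.featurize(self.ir, self.sequence), reward, done
```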

Generating GPU Compiler Heuristics using Reinforcement Learning [article]

Ian Colbert, Jake Daly, Norm Rubin
2021 arXiv   pre-print
In this paper, we developed a GPU compiler autotuning framework that uses off-policy deep reinforcement learning to generate heuristics that improve the frame rates of graphics applications.  ...  We show that our machine learning-based compiler autotuning framework matches or surpasses the frame rates for 98% of graphics benchmarks with an average uplift of 1.6% up to 15.8%.  ...  Acknowledgements We would like to thank Mike Bedy, Robert Gottlieb, Chris Reeve, Andrew Dupont, Karen Dintino, Peter Scannell and the rest of the AMD GPU compiler team for insightful discussions and infrastructure  ... 
arXiv:2111.12055v1 fatcat:2v3ekxzzfncr5cz6yuunslqspi
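The "off-policy" framing in the abstract above means the heuristic is learned from logged experience gathered by an existing behavior policy rather than by interacting with the compiler online. The sketch below is a generic, assumption-laden stand-in (a one-step value update over a replay buffer of logged decisions), not the paper's actual network, features, or data pipeline.

```python
# Hypothetical off-policy sketch: learn heuristic decision values from logged
# (program features, decision, frame-rate reward) tuples.
import random
from collections import defaultdict
from typing import Hashable, List, Tuple

Transition = Tuple[Hashable, int, float]   # (state features, decision, reward)

def fit_values_from_logs(replay: List[Transition], epochs: int = 1000, lr: float = 0.1):
    q = defaultdict(float)                  # value estimate per (state, decision)
    for _ in range(epochs):
        state, action, reward = random.choice(replay)   # sample logged experience
        # one-step, bandit-style target: each decision is scored by its measured reward
        q[(state, action)] += lr * (reward - q[(state, action)])
    return q

def greedy_heuristic(q, state, actions):
    """Pick the decision with the highest learned value for this program state."""
    return max(actions, key=lambda a: q[(state, a)])
```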

Ansor: Generating High-Performance Tensor Programs for Deep Learning [article]

Lianmin Zheng, Chengfan Jia, Minmin Sun, Zhao Wu, Cody Hao Yu, Ameer Haj-Ali, Yida Wang, Jun Yang, Danyang Zhuo, Koushik Sen, Joseph E. Gonzalez, Ion Stoica
2020 arXiv   pre-print
Ansor then fine-tunes the sampled programs with evolutionary search and a learned cost model to identify the best programs.  ...  Currently, deep learning systems rely on vendor-provided kernel libraries or various search strategies to get performant tensor programs.  ...  Acknowledgement We would like to thank Weizhao Xian, Tianqi Chen, Frank Luan, anonymous reviewers, and our shepherd, Derek Murray, for their insightful feedback.  ... 
arXiv:2006.06762v4 fatcat:as6rrj2bvjcwtmkjremrrfkqhq
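A minimal sketch of the search loop the Ansor abstract describes: sample candidate tensor programs, evolve them by mutation, rank the pool cheaply with a learned cost model, and measure only the top-ranked candidates on hardware. Every callable below is an assumed stand-in supplied by the caller, not Ansor's actual API.

```python
# Hypothetical evolutionary search guided by a learned cost model.
import random
from typing import Any, Callable, List, Tuple

def evolutionary_search(
    sample: Callable[[], Any],              # draw a random candidate program
    mutate: Callable[[Any], Any],           # perturb one candidate
    predict_cost: Callable[[Any], float],   # learned cost model (lower is better)
    measure: Callable[[Any], float],        # real hardware runtime in seconds
    generations: int = 10,
    population: int = 64,
    top_k: int = 8,
) -> Tuple[Any, float]:
    candidates: List[Any] = [sample() for _ in range(population)]
    best, best_time = None, float("inf")
    for _ in range(generations):
        children = [mutate(random.choice(candidates)) for _ in range(population)]
        # rank the combined pool with the cost model instead of measuring everything
        pool = sorted(candidates + children, key=predict_cost)
        candidates = pool[:population]
        for prog in candidates[:top_k]:     # only the top-ranked candidates hit hardware
            runtime = measure(prog)
            if runtime < best_time:
                best, best_time = prog, runtime
    return best, best_time
```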

CompilerGym: Robust, Performant Compiler Optimization Environments for AI Research [article]

Chris Cummins, Bram Wasti, Jiadong Guo, Brandon Cui, Jason Ansel, Sahir Gomez, Somya Jain, Jia Liu, Olivier Teytaud, Benoit Steiner, Yuandong Tian, Hugh Leather
2021 arXiv   pre-print
CompilerGym enables anyone to experiment on production compiler optimization problems through an easy-to-use package, regardless of their experience with compilers.  ...  In making it easy for anyone to experiment with compilers - irrespective of their background - we aim to accelerate progress in the AI and compiler research domains.  ...  Other reinforcement learning compiler works include MLGO [3] which learns a policy for LLVM's function inlining heuristic, NeuroVectorizer [5] which formulates the problem of instruction vectorization  ... 
arXiv:2109.08267v2 fatcat:s2a3qrrk7zczflszoeta3ztl7q
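A minimal usage sketch of the gym-style interface the CompilerGym abstract describes. The environment, observation-space, reward-space, and benchmark names below are recalled from CompilerGym's documentation and may differ across versions; treat them as illustrative rather than authoritative.

```python
# Sketch of a random-agent episode on CompilerGym's LLVM environment (names assumed).
import compiler_gym  # pip install compiler_gym

env = compiler_gym.make(
    "llvm-v0",                              # LLVM pass-ordering environment
    observation_space="Autophase",          # static program feature vector
    reward_space="IrInstructionCountOz",    # IR instruction count relative to -Oz
)
env.reset(benchmark="benchmark://cbench-v1/qsort")   # program to optimize (assumed URI)

episode_reward = 0.0
for _ in range(20):                         # apply 20 randomly chosen passes
    observation, reward, done, info = env.step(env.action_space.sample())
    episode_reward += reward
    if done:
        break

print(f"Cumulative reward: {episode_reward:.3f}")
env.close()
```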