
AntMan: Sparse Low-Rank Compression to Accelerate RNN inference [article]

Samyam Rajbhandari, Harsh Shrivastava, Yuxiong He
2019 arXiv   pre-print
To address this issue, we develop AntMan, combining structured sparsity with low-rank decomposition synergistically, to reduce model computation, size and execution time of RNNs while attaining desired  ...  AntMan extends knowledge distillation based training to learn the compressed models efficiently.  ...  Conclusion We develop AntMan, a technique that combines structured sparsity and low-rank decomposition to compress dense matrix-multiplications.  ... 
arXiv:1910.01740v1 fatcat:cs2fz54ncnft3pay3ckseuy3cm
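The snippet describes compressing dense matrix multiplications via low-rank decomposition combined with structured sparsity. As a rough illustration (not the authors' code; all names here are hypothetical), the low-rank part alone already cuts the cost of a matrix-vector product from O(n²) to O(2nr) multiply-adds when rank r ≪ n:

```python
import numpy as np

# Hypothetical sketch of low-rank weight compression: approximate a
# dense n x n weight matrix W by a product A @ B of rank r << n.
# A matvec then costs 2*n*r multiply-adds instead of n*n.
rng = np.random.default_rng(0)
n, r = 256, 16
A = rng.standard_normal((n, r))
B = rng.standard_normal((r, n))
W = A @ B                      # an exactly rank-r matrix standing in for learned weights
x = rng.standard_normal(n)

y_dense   = W @ x              # n*n   = 65536 multiply-adds
y_lowrank = A @ (B @ x)        # 2*n*r =  8192 multiply-adds, identical result
assert np.allclose(y_dense, y_lowrank)
```

AntMan additionally applies structured sparsity to the factors, which this sketch omits.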

ADVCOMP 2014 Technical Program Committee

Danny Krizanc, Ivan Rodero, Jerry Trahan, Wenbing Zhao, Dmitry Fedosov (Jülich GmbH, Germany), Alice Koniges (Lawrence Berkeley National Laboratory / NERSC, USA), Markus Kunde (+106 others)
2014 ADVCOMP   unpublished
We gratefully thank the technical program committee co-chairs who helped identify the appropriate groups to submit contributions.  ...  We also kindly thank all the authors who dedicated much of their time and effort to contributing to ADVCOMP 2014.  ...  rank(r_j) is the ranking position of the j-th correct result, so the average ranking value is calculated as follows: Average-r = (1/m) Σ_{j=1}^{m} rank(r_j) (8). This value reflects the average ranking of the query in  ...
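Equation (8) in the snippet is just the arithmetic mean of the ranking positions of the m correct results. A minimal sketch (function name and example values are hypothetical, not from the source):

```python
# Illustration of Eq. (8): Average-r = (1/m) * sum_{j=1}^{m} rank(r_j),
# the mean ranking position of the m correct results for a query.
def average_rank(ranks):
    """ranks: ranking positions rank(r_j) of each correct result."""
    return sum(ranks) / len(ranks)

# Example: three correct results appear at positions 1, 3, and 8.
print(average_rank([1, 3, 8]))  # -> 4.0
```

A lower Average-r means the correct results are ranked closer to the top.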