
Circumventing Outliers of AutoAugment with Knowledge Distillation [article]

Longhui Wei, An Xiao, Lingxi Xie, Xin Chen, Xiaopeng Zhang, Qi Tian
2020 arXiv pre-print
By combining knowledge distillation and AutoAugment, we claim a new state of the art on ImageNet classification with a top-1 accuracy of 85.8%.  ...  To relieve the inaccuracy of supervision, we make use of knowledge distillation, which uses the output of a teacher model to guide network training.  ...  Acknowledgements: We thank Jianzhong He for helping to set up the parallelized training system. We thank Chunjing Xu, Wei Zhang, and Zhaowei Luo for coordinating hardware resources.  ...
arXiv:2003.11342v1 fatcat:3kxcgwtxmrgzrcd47wpnm7hctm
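
The entry above describes using a teacher model's output to guide training under AutoAugment. As a rough illustration of the general knowledge-distillation objective (Hinton et al., 2015), not this paper's exact formulation, a minimal PyTorch sketch might look like the following; the function name, temperature `T`, and weight `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Generic knowledge-distillation loss: a weighted sum of hard-label
    cross-entropy and the KL divergence between temperature-softened
    teacher and student distributions. T and alpha are illustrative
    hyperparameters, not values taken from the paper."""
    # Soft targets from the teacher, softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_probs = F.log_softmax(student_logits / T, dim=-1)
    # The KL term is scaled by T^2 so its gradients keep a comparable magnitude.
    kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```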

Fast-Bonito: A Faster Basecaller for Nanopore Sequencing [article]

Zhimeng Xu, Yuting Mai, Denghui Liu, Wenjun He, Xinyuan Lin, Chi Xu, Lei Zhang, Xin Meng, Joseph Mafofo, Walid Abbas Zaher, Yi Li, Nan Qiao
2020 bioRxiv pre-print
The accuracy of Fast-Bonito is also slightly higher than that of the original Bonito.  ...  Bonito is a recently developed basecaller based on a deep neural network whose architecture is composed of a single convolutional layer followed by three stacked bidirectional GRU layers.  ...  Distilling the Knowledge in a Neural Network. arXiv:1503.02531 (2015). 25. Wei, L. et al. Circumventing Outliers of AutoAugment with Knowledge Distillation. arXiv:2003.11342 (2020). 26.  ...
doi:10.1101/2020.10.08.318535 fatcat:g56ixkz3kbckpctoc3jlb3kfo4
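
The entry above summarizes Bonito's network as a single convolutional layer feeding three stacked bidirectional GRU layers. A minimal PyTorch sketch of that general shape might look as follows; the class name, channel sizes, kernel width, stride, and output vocabulary are assumptions for illustration, not Bonito's or Fast-Bonito's actual configuration.

```python
import torch
import torch.nn as nn

class ConvGRUBasecaller(nn.Module):
    """Rough sketch of the architecture described above: one convolutional
    layer followed by three stacked bidirectional GRU layers and a linear
    decoder over the base alphabet. All sizes are illustrative guesses."""

    def __init__(self, in_channels=1, hidden=256, n_bases=5):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, hidden, kernel_size=9, stride=3, padding=4)
        self.gru = nn.GRU(hidden, hidden, num_layers=3,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_bases)  # 2x for the two GRU directions

    def forward(self, signal):
        # signal: (batch, in_channels, time) raw nanopore current values
        x = torch.relu(self.conv(signal))
        x = x.transpose(1, 2)          # (batch, time, hidden) for the GRU
        x, _ = self.gru(x)
        return self.fc(x)              # per-timestep logits, e.g. for a CTC loss
```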