
A Survey on Dynamic Neural Networks for Natural Language Processing [article]

Canwen Xu, Julian McAuley
2022 arXiv   pre-print
In this survey, we summarize the progress of three types of dynamic neural networks in NLP: skimming, mixture of experts, and early exit.  ...  Dynamic neural networks could be a promising solution to the growing parameter counts of pretrained language models, allowing both model pretraining with trillions of parameters and faster inference  ...  to BERT inference. The training for DeeBERT is two-stage: they first train BERT on downstream tasks following standard fine-tuning.  ...
arXiv:2202.07101v1
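The early-exit family surveyed above attaches classifiers to intermediate layers and stops computing once an intermediate prediction looks reliable. A minimal sketch of that idea, using a generic softmax-confidence threshold (not DeeBERT's exact entropy-based rule; the `layers`/`heads` callables are toy stand-ins):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D logit vector.
    e = np.exp(logits - logits.max())
    return e / e.sum()

def early_exit_inference(x, layers, heads, threshold=0.9):
    """Run the layer stack in order; after each layer, an exit head scores
    the hidden state, and we stop as soon as the top softmax probability
    clears the confidence threshold. This is a generic sketch of the
    early-exit pattern, not any specific paper's rule."""
    h = x
    probs = None
    for i, (layer, head) in enumerate(zip(layers, heads)):
        h = layer(h)
        probs = softmax(head(h))
        if probs.max() >= threshold:
            return int(probs.argmax()), i  # early exit at layer i
    return int(probs.argmax()), len(layers) - 1  # reached the final layer

# Toy usage: each "layer" increments the hidden state, each "head"
# maps it to two logits, so confidence grows with depth.
layer = lambda h: h + 1.0
head = lambda h: np.array([h.sum(), 0.0])
pred, exit_layer = early_exit_inference(np.zeros(2), [layer] * 3, [head] * 3)
```

Easy inputs clear the threshold at a shallow layer and skip the rest of the stack, which is where the inference savings come from.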

Consistent Accelerated Inference via Confident Adaptive Transformers [article]

Tal Schuster, Adam Fisch, Tommi Jaakkola, Regina Barzilay
2021 arXiv   pre-print
Our method trains additional prediction heads on top of intermediate layers, and dynamically decides when to stop allocating computational effort to each input using a meta consistency classifier.  ...  To calibrate our early prediction stopping rule, we formulate a unique extension of conformal prediction. We demonstrate the effectiveness of this approach on four classification and regression tasks.  ...  RomeBERT: Robust training of multi-exit BERT. Alex Graves. 2017. Adaptive computation time for recurrent neural networks. Antonio Gulli. 2004. AG's corpus of news articles.  ...
arXiv:2104.08803v2
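The stopping rule described in this abstract differs from plain confidence thresholding: a meta classifier estimates whether the current intermediate prediction is already *consistent* with what the full model would output. A structural sketch under stated assumptions (the `meta` callable is hypothetical, and the fixed `epsilon` tolerance stands in for the paper's conformal-prediction calibration):

```python
import numpy as np

def consistent_early_exit(x, layers, heads, meta, epsilon=0.1):
    """Stop computing once a meta classifier estimates, with probability
    at least 1 - epsilon, that the current head's prediction already
    agrees with the full model's final prediction. In the paper this
    threshold is calibrated via an extension of conformal prediction;
    here it is a fixed tolerance, so this is only a sketch."""
    pred = None
    h = x
    for layer, head in zip(layers, heads):
        h = layer(h)
        logits = head(h)
        pred = int(np.argmax(logits))
        if meta(h, logits) >= 1.0 - epsilon:
            break  # confident the prediction is consistent; stop here
    return pred

# Toy usage: the hypothetical meta classifier's consistency estimate
# grows as the hidden state accumulates evidence through the layers.
layer = lambda h: h + 1.0
head = lambda h: np.array([h.sum(), 0.0])
meta = lambda h, logits: min(1.0, float(h.sum()) / 4.0)
pred = consistent_early_exit(np.zeros(2), [layer] * 3, [head] * 3, meta)
```

The design difference from simple thresholding is the guarantee being targeted: the classifier is trained (and calibrated) to bound disagreement with the full model, rather than to measure raw prediction confidence.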