When BERT Plays the Lottery, All Tickets Are Winning [article]

Sai Prasanna, Anna Rogers, Anna Rumshisky
2020, arXiv pre-print
Large Transformer-based models were shown to be reducible to a smaller number of self-attention heads and layers. We consider this phenomenon from the perspective of the lottery ticket hypothesis, using both structured and magnitude pruning. For fine-tuned BERT, we show that (a) it is possible to find subnetworks achieving performance that is comparable with that of the full model, and (b) similarly-sized subnetworks sampled from the rest of the model perform worse. Strikingly, with structured pruning even the worst possible subnetworks remain highly trainable, indicating that most pre-trained BERT weights are potentially useful. We also study the "good" subnetworks to see if their success can be attributed to superior linguistic knowledge, but find them unstable, and not explained by meaningful self-attention patterns.
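For illustration, the two pruning regimes mentioned in the abstract can be sketched with PyTorch's pruning utilities and the HuggingFace transformers library. This is a minimal, hypothetical example, not the authors' code: the checkpoint name, the choice of heads to remove, and the 30% pruning fraction are placeholders.

import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForSequenceClassification

# A fine-tuned BERT would normally be loaded here; "bert-base-uncased" is a stand-in.
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Structured pruning: remove whole self-attention heads,
# given a {layer index: [head indices]} map (heads chosen arbitrarily here).
model.prune_heads({0: [0, 1], 11: [2]})

# Magnitude pruning: zero out the 30% smallest-magnitude weights
# in every linear layer (unstructured, per-layer).
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Report the resulting sparsity over the pruned linear layers.
zeros = sum(int(torch.sum(m.weight == 0)) for m in model.modules()
            if isinstance(m, torch.nn.Linear))
total = sum(m.weight.nelement() for m in model.modules()
            if isinstance(m, torch.nn.Linear))
print(f"sparsity over linear layers: {zeros / total:.2%}")

The sketch shows only the pruning mechanics; selecting which heads, layers, or weights constitute a "good" versus "bad" subnetwork is the subject of the paper's experiments.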
arXiv:2005.00561v2