Beyond Preserved Accuracy: Evaluating Loyalty and Robustness of BERT Compression
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Recent studies on compression of pretrained language models (e.g., BERT) usually use preserved accuracy as the metric for evaluation. In this paper, we propose two new metrics, label loyalty and probability loyalty, that measure how closely a compressed model (i.e., student) mimics the original model (i.e., teacher). We also explore the effect of compression on robustness under adversarial attacks. We benchmark quantization, pruning, knowledge distillation, and progressive module replacing with these loyalty and robustness metrics.
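For concreteness, the sketch below shows one way the two loyalty metrics could be computed from teacher and student logits, assuming label loyalty is the fraction of examples where the two models predict the same label and probability loyalty is derived from the Jensen-Shannon divergence between their output distributions. The function names and the exact formula are illustrative assumptions, not the paper's reference implementation.

import torch
import torch.nn.functional as F

def label_loyalty(teacher_logits: torch.Tensor, student_logits: torch.Tensor) -> float:
    # Fraction of examples where the compressed (student) model predicts
    # the same label as the original (teacher) model.
    return (teacher_logits.argmax(-1) == student_logits.argmax(-1)).float().mean().item()

def probability_loyalty(teacher_logits: torch.Tensor, student_logits: torch.Tensor) -> float:
    # Assumed instantiation: 1 - sqrt(JS divergence) between the teacher and
    # student output distributions, averaged over the batch.
    p = F.softmax(teacher_logits, dim=-1)
    q = F.softmax(student_logits, dim=-1)
    m = 0.5 * (p + q)
    kl_pm = (p * (p.log() - m.log())).sum(-1)  # KL(p || m) per example
    kl_qm = (q * (q.log() - m.log())).sum(-1)  # KL(q || m) per example
    js = 0.5 * (kl_pm + kl_qm)
    return (1.0 - js.clamp(min=0).sqrt()).mean().item()

# Example usage: 4 dev-set examples, 3 classes (random logits for illustration)
teacher = torch.randn(4, 3)
student = torch.randn(4, 3)
print(label_loyalty(teacher, student), probability_loyalty(teacher, student))

Under this reading, higher values of either metric mean the student mimics the teacher more closely; a label loyalty of 1.0 means the two models agree on every prediction.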
doi:10.18653/v1/2021.emnlp-main.832
fatcat:mxpovceylzdefbktsqxr6b53uu