AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks [article]

Chin-Lun Fu, Zih-Ching Chen, Yun-Ru Lee, Hung-yi Lee
<span title="2022-04-30">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Transformer-based pre-trained models with millions of parameters require large storage. Recent approaches tackle this shortcoming by training adapters, but these approaches still require a relatively large number of parameters. In this study, AdapterBias, a surprisingly simple yet effective adapter architecture, is proposed. AdapterBias adds a token-dependent shift to the hidden output of transformer layers to adapt to downstream tasks with only a vector and a linear layer. Extensive experiments are conducted to demonstrate the effectiveness of AdapterBias. The experiments show that our proposed method can dramatically reduce the trainable parameters compared to previous works, with a minimal decrease in task performance compared with fine-tuned pre-trained models. We further find that AdapterBias automatically learns to assign more significant representation shifts to the tokens most relevant to the task at hand.
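The abstract describes the core mechanism: a shared shift vector combined with a per-token scalar weight produced by a linear layer. A minimal PyTorch sketch of that idea might look as follows (module and attribute names are illustrative, not taken from the paper's code):

```python
import torch
import torch.nn as nn

class AdapterBias(nn.Module):
    """Sketch of a token-dependent representation shift, assuming the
    mechanism from the abstract: shift_i = alpha_i * v, where v is a
    learned vector shared across tokens and alpha_i is a scalar weight
    produced by a linear layer from token i's hidden state."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.v = nn.Parameter(torch.zeros(hidden_dim))  # shared shift vector
        self.alpha = nn.Linear(hidden_dim, 1)           # token-dependent weight

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim)
        weights = self.alpha(hidden)                    # (batch, seq_len, 1)
        return hidden + weights * self.v                # broadcast over hidden_dim
```

Only `v` and the linear layer are trainable, which is what keeps the added parameter count to roughly `2 * hidden_dim + 1` per transformer layer.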
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="">arXiv:2205.00305v1</a> <a target="_blank" rel="external noopener" href="">fatcat:xgb2jjg2onavrox7mocz4duayy</a> </span>