Efficient Retrieval Optimized Multi-task Learning [article]

Hengxin Fun, Sunil Gandhi, Sujith Ravi
2021 arXiv pre-print
Recently, there have been significant advances in neural methods for tackling knowledge-intensive tasks such as open domain question answering (QA). These advances are fueled by combining large pre-trained language models with learnable retrieval of documents. The majority of these models use separate encoders for learning the query representation and the passage representation for the retriever, plus an additional encoder for the downstream task. Using a separate encoder for each stage/task consumes a large amount of memory and makes it difficult to scale to a large number of tasks. In this paper, we propose a novel Retrieval Optimized Multi-task (ROM) framework for jointly training self-supervised tasks, knowledge retrieval, and extractive question answering. Our ROM approach presents a unified and generalizable framework that enables scaling efficiently to multiple tasks, varying levels of supervision, and optimization choices such as different learning schedules without changing the model architecture. It also provides the flexibility of changing the encoders without changing the architecture of the system. Using our framework, we achieve comparable or better performance than recent methods on QA, while drastically reducing the number of parameters.
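To make the shared-encoder idea concrete, here is a minimal sketch (not the authors' implementation) of a single encoder serving both dense retrieval and extractive QA, in contrast to the three-encoder setup the abstract critiques. All class, head, and parameter names below are illustrative assumptions, and the tiny Transformer stands in for a large pre-trained language model.

```python
# Sketch, assuming PyTorch: one shared Transformer encoder produces query and
# passage embeddings for retrieval, plus token-level start/end logits for
# extractive QA, instead of three separate encoders.
import torch
import torch.nn as nn

class SharedEncoderROM(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, layers=2, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(
            hidden, heads, dim_feedforward=4 * hidden, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        # Lightweight task-specific heads sit on top of the shared encoder.
        self.retrieval_proj = nn.Linear(hidden, hidden)  # dense retrieval embeddings
        self.qa_head = nn.Linear(hidden, 2)              # start/end logits for QA

    def encode(self, token_ids):
        return self.encoder(self.embed(token_ids))       # (B, T, H)

    def embed_for_retrieval(self, token_ids):
        # The same encoder handles queries and passages; pool the first token.
        h = self.encode(token_ids)
        return self.retrieval_proj(h[:, 0])              # (B, H)

    def qa_logits(self, token_ids):
        h = self.encode(token_ids)
        start, end = self.qa_head(h).unbind(dim=-1)      # each (B, T)
        return start, end

model = SharedEncoderROM()
query = torch.randint(0, 30522, (2, 16))
passage = torch.randint(0, 30522, (2, 64))

# Retrieval scores: dot products between query and passage embeddings.
scores = model.embed_for_retrieval(query) @ model.embed_for_retrieval(passage).T

# Extractive QA over the passage, reusing the same encoder weights.
start_logits, end_logits = model.qa_logits(passage)
```

Because the retrieval and QA heads share one backbone, adding a task costs only a small head rather than a full encoder, which is the parameter saving the abstract refers to.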
arXiv:2104.10129v1