TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data

Pengcheng Yin, Graham Neubig, Wen-tau Yih, Sebastian Riedel
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020)
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, and hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TABERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TABERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TABERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WIKITABLEQUESTIONS, while performing competitively on the text-to-SQL dataset SPIDER.

* Work done while at Facebook AI Research.
doi:10.18653/v1/2020.acl-main.745
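As a rough illustration of the joint-encoding idea described in the abstract, the Python sketch below linearizes a toy table and encodes it together with an NL question using an off-the-shelf BERT encoder from the Hugging Face transformers library. This is only an approximation of the general recipe, not the actual TABERT architecture; the question, table contents, and linearization format are invented for illustration.

# Illustrative sketch only (not the TaBERT model itself): it shows the general
# idea of jointly encoding an NL question with a linearized table using a
# generic BERT encoder. The question, column names, and cell values are made up.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

question = "show me countries ranked by GDP"

# A toy (semi-)structured table: (column name, column type, sample cell value).
columns = [
    ("Nation", "text", "United States"),
    ("Gross Domestic Product", "real", "21,439,453"),
]

# Linearize each column as "name | type | value" and feed it together with the
# question as a sentence pair, so the encoder attends over both jointly.
table_str = " ; ".join(f"{name} | {ctype} | {value}" for name, ctype, value in columns)
inputs = tokenizer(question, table_str, return_tensors="pt", truncation=True)

with torch.no_grad():
    outputs = encoder(**inputs)

# Token-level joint representations that a downstream neural semantic parser
# could consume as its feature representation layer.
joint_encoding = outputs.last_hidden_state
print(joint_encoding.shape)  # e.g., torch.Size([1, seq_len, 768])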