The Natural Language Decathlon: Multitask Learning as Question Answering
[article]
2018
arXiv
pre-print
We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks: question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic ...
We cast all tasks as question answering over a context. ...
Conclusion We introduced the Natural Language Decathlon (decaNLP), a new benchmark for measuring the performance of NLP models across ten tasks that appear disparate until unified as question answering ...
arXiv:1806.08730v1
fatcat:pdvwr3fqfrdnjdzwotzahsjf3e
Nonclassical connectionism should enter the decathlon
2003
Behavioral and Brain Sciences
We have distilled these into 12 criteria: flexible behavior, real-time performance, adaptive behavior, vast knowledge base, dynamic behavior, knowledge integration, natural language, learning, development ...
The strengths of classical connectionism on this test derive from its intense effort in addressing empirical phenomena in such domains as language and cognitive development. ...
As a partial but significant test, we suggest looking at those tests that society has set up as measures of language processing, something like the task of reading a passage and answering questions on ...
doi:10.1017/s0140525x03240139
fatcat:qzqkqm6ejvelzostaoc4lfp6ni
Multi-task learning for natural language processing in the 2020s: where are we going?
2020
Pattern Recognition Letters
the release of new challenge problems, such as GLUE and the NLP Decathlon (decaNLP). ...
Multi-task learning (MTL) significantly pre-dates the deep learning era, and it has seen a resurgence in the past few years as researchers have been applying MTL to deep learning solutions for natural ...
All inputs and tasks are modeled as natural language questions and outputs in the form of a natural language answer. ...
doi:10.1016/j.patrec.2020.05.031
fatcat:duze6ovk5fauxfn6vsbn5bgrbq
Ask me in your own words: paraphrasing for multitask question answering
2021
PeerJ Computer Science
Multitask learning has led to significant advances in Natural Language Processing, including the decaNLP benchmark where question answering is used to frame 10 natural language understanding tasks in a ...
This enables analysis of how transformations such as swapping the class labels and changing the sentence modality lead to a large performance degradation. ...
McCann et al. (2018) developed a new NLP benchmark: the Natural Language Decathlon (decaNLP). ...
doi:10.7717/peerj-cs.759
pmid:34805510
pmcid:PMC8576550
fatcat:grwv562m7napjji22ti6cuts2q
Question Answering in the Biomedical Domain
2019
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Question answering focusing on patients is less studied. We find that there are some challenges in patient question answering, such as limited annotated data, the lexical gap, and the quality of answer spans. ...
We aim to address some of these gaps by extending and developing upon the literature to design a question answering system that can decide on the most appropriate answers for patients attempting to self-diagnose ...
The natural language decathlon: Multitask learning as question answering. Computing Research Repository, abs/1806.08730. Ryan McDonald, Georgios-Ioannis Brokos, and Ion Androutsopoulos. 2018. ...
doi:10.18653/v1/p19-2008
dblp:conf/acl/Nguyen19
fatcat:23yy6eohofh7tepzp4iixore2e
The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
[article]
2020
arXiv
pre-print
We introduce dodecaDialogue: a set of 12 tasks that measures if a conversational agent can communicate engagingly with personality and empathy, ask questions, answer questions by utilizing knowledge resources ...
We obtain state-of-the-art results on many of the tasks, providing a strong baseline for this challenge. ...
The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. ...
arXiv:1911.03768v2
fatcat:xafwif4fbfaptm3jho67tukpeq
Who's on First?: Probing the Learning and Representation Capabilities of Language Models on Deterministic Closed Domains
2021
Proceedings of the 25th Conference on Computational Natural Language Learning
unpublished
The natural language decathlon: Multitask learning as question answering. ArXiv, ... Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. ...
We probe the learning and representation capabilities of language models on partial games with question-answer pairs. ...
doi:10.18653/v1/2021.conll-1.16
fatcat:tfogkpv6ubgh3lzvp7i3ade7ay
Efficient Meta Lifelong-Learning with Limited Memory
[article]
2020
arXiv
pre-print
Current natural language processing models work well on a single task, yet they often fail to continuously learn new tasks without forgetting previous ones as they are re-trained throughout their lifetime ...
Extensive experiments on text classification and question answering benchmarks demonstrate the effectiveness of our framework by achieving state-of-the-art performance using merely 1% memory size and narrowing ...
The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. 1995. ...
arXiv:2010.02500v1
fatcat:qoukswlygbgmvp7slixztamxiq
Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections
[article]
2021
arXiv
pre-print
Large pre-trained language models (LMs) such as GPT-3 have acquired a surprising ability to perform zero-shot learning. ...
When evaluated on unseen tasks, meta-tuned models outperform a same-sized QA model and the previous SOTA zero-shot learning system based on natural language inference. ...
Acknowledgements We thank Eric Wallace for his feedback throughout the project. ...
arXiv:2104.04670v5
fatcat:nicxnnusjjg3jdyqch6nzy7y5m
Event Extraction by Answering (Almost) Natural Questions
2020
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
unpublished
To avoid this issue, we introduce a new paradigm for event extraction by formulating it as a question answering (QA) task that extracts the event arguments in an end-to-end manner. ...
..., in a zero-shot learning setting). ...
Acknowledgments We thank the anonymous reviewers and Heng Ji for helpful suggestions. This research is based on work supported in part by DARPA LwLL Grant FA8750-19-2-0039. ...
doi:10.18653/v1/2020.emnlp-main.49
fatcat:r4g4lsaf4nhg7bs54tkky2b5v4
Rethinking Search: Making Experts out of Dilettantes
[article]
2021
arXiv
pre-print
Successful question answering systems offer a limited corpus created on-demand by human experts, which is neither timely nor scalable. ...
This paper examines how ideas from classical information retrieval and large pre-trained language models can be synthesized and evolved into systems that truly deliver on the promise of expert advice. ...
Annual Meeting of the Association for Computational Linguistics, Volume 1 (Long ...
The Natural Language Decathlon: Multitask Learning as Question Answering. ...
arXiv:2105.02274v1
fatcat:qdghlnv2nnfhnoo6eafdaxqxzy
Framework for Deep Learning-Based Language Models using Multi-task Learning in Natural Language Understanding: A Systematic Literature Review and Future Directions
2022
IEEE Access
Learning human languages is a difficult task for a computer. However, Deep Learning (DL) techniques have significantly enhanced performance on almost all natural language processing (NLP) tasks. ...
NLU (Natural Language Understanding) is a subset of NLP including tasks, like machine translation, dialogue-based systems, natural language inference, text entailment, sentiment analysis, etc. ...
NLP tasks, typically QA, content summarization, and NLI, formulate the candidate tasks considered in the Natural Language Decathlon (decaNLP) benchmark [74]. ...
doi:10.1109/access.2022.3149798
fatcat:k3kdt4eryzdfpk5k6w62jtlskm
CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP
[article]
2021
arXiv
pre-print
Humans can learn a new language task efficiently with only few examples, by leveraging their knowledge obtained when learning prior tasks. ...
Our analysis reveals that the few-shot learning ability on unseen tasks can be improved via an upstream learning stage using a set of seen tasks. ...
The natural language decathlon: Multitask learning as question answering. ArXiv, abs/1806.08730. Clara H. McCreery, Namit Katariya, Anitha Kannan, Manish Chablani, and Xavier Amatriain. 2020. ...
arXiv:2104.08835v2
fatcat:xnhrmmsmyzb4fjo7ealrw2vnka
Efficient Meta Lifelong-Learning with Limited Memory
2020
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
unpublished
Current natural language processing models work well on a single task, yet they often fail to continuously learn new tasks without forgetting previous ones as they are re-trained throughout their lifetime ...
Extensive experiments on text classification and question answering benchmarks demonstrate the effectiveness of our framework by achieving state-of-the-art performance using merely 1% memory size and narrowing ...
The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730. James L McClelland, Bruce L McNaughton, and Randall C O'Reilly. 1995. ...
doi:10.18653/v1/2020.emnlp-main.39
fatcat:2hfhyh5hczbxtl3ulhvkmcfthy
Medication Regimen Extraction From Medical Conversations
[article]
2020
arXiv
pre-print
We frame the problem as a Question Answering (QA) task and perform comparative analysis over: a QA approach, a new combined QA and Information Extraction approach, and other baselines. ...
Compared to the baseline, our best-performing models improve the dosage and frequency extractions' ROUGE-1 F1 scores from 54.28 and 37.13 to 89.57 and 45.94, respectively. ...
Acknowledgements We thank: University of Pittsburgh Medical Center (UPMC) and Abridge AI Inc. for providing access to the de-identified data corpus; Dr. ...
arXiv:1912.04961v3
fatcat:rztw3fkinjexznagiptdpf6ynu
Showing results 1–15 of 43