Pentagon at MEDIQA 2019: Multi-task Learning for Filtering and Re-ranking Answers using Language Inference and Question Entailment
2019
Proceedings of the 18th BioNLP Workshop and Shared Task
In this work, we introduce an end-to-end system, trained in a multi-task setting, to filter and re-rank answers in the medical domain. We use task-specific pre-trained models as deep feature extractors. ...
Our model achieves the highest Spearman's Rho and Mean Reciprocal Rank of 0.338 and 0.9622, respectively, on the ACL-BioNLP workshop MEDIQA Question Answering shared task. ...
To train the Natural Language Inference and Question Entailment module of our system, we again use the data from the MEDIQA shared task. ...
doi:10.18653/v1/w19-5041
dblp:conf/bionlp/PugaliyaSGSGNM19
fatcat:47qaanpb7jfxrofpdwuvzf5jbe
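The two figures reported in this entry, Spearman's Rho and Mean Reciprocal Rank, measure how well the system ranks candidate answers for each question. A minimal sketch of how these metrics are typically computed (not the authors' code; the gold labels and scores below are hypothetical):

```python
# Minimal sketch (not the authors' code) of the two reported ranking metrics.
from scipy.stats import spearmanr

def mean_reciprocal_rank(ranked_relevance):
    """ranked_relevance: one 0/1 gold-label list per question, ordered by
    the system's ranking (best answer first)."""
    total = 0.0
    for labels in ranked_relevance:
        for rank, rel in enumerate(labels, start=1):
            if rel:  # the first relevant answer fixes the reciprocal rank
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

print(mean_reciprocal_rank([[0, 1, 0], [1, 0, 0]]))  # -> 0.75

# Spearman's Rho correlates the system's answer scores with the gold
# relevance scores for one question (both hypothetical here).
rho, _ = spearmanr([0.9, 0.4, 0.7, 0.1], [4, 2, 3, 1])
print(rho)  # -> 1.0 (the two rankings agree exactly)
```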
Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering
2019
Proceedings of the 18th BioNLP Workshop and Shared Task
MEDIQA 2019 includes three tasks: Natural Language Inference (NLI), Recognizing Question Entailment (RQE), and Question Answering (QA) in the medical domain. 72 teams participated in the challenge, achieving ...
This paper presents the MEDIQA 2019 shared task organized at the ACL-BioNLP workshop. ...
We would like to thank Sharada Mohanty, CEO and co-founder of AIcrowd, and Yassine Mrabet from the NLM for their support with the CHiQA system. ...
doi:10.18653/v1/w19-5039
dblp:conf/bionlp/AbachaSD19
fatcat:fa2z477k7jfitkd6yfgksyrofa
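To make the three tasks concrete, hypothetical example instances are sketched below; the field names and texts are illustrative, not the official MEDIQA data format.

```python
# Hypothetical instances for the three MEDIQA 2019 tasks (illustrative only).
nli_example = {
    "premise": "The patient denies chest pain.",
    "hypothesis": "The patient has chest pain.",
    "label": "contradiction",  # one of: entailment / neutral / contradiction
}
rqe_example = {
    "chq": "Can amoxicillin cause a rash in children?",  # consumer question
    "faq": "What are the side effects of amoxicillin?",  # FAQ question
    "label": True,  # does an answer to the FAQ also answer the consumer question?
}
qa_example = {
    "question": "What are the treatments for hypertension?",
    "answers": ["candidate answer 1", "candidate answer 2"],
    "relevance": [4, 1],  # gold relevance used for filtering and re-ranking
}
```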
Surf at MEDIQA 2019: Improving Performance of Natural Language Inference in the Clinical Domain by Adopting Pre-trained Language Model
2019
Proceedings of the 18th BioNLP Workshop and Shared Task
The lack of large datasets and the pervasive use of domain-specific language (i.e., abbreviations and acronyms) in the clinical domain cause slower progress in clinical NLP tasks than in general NLP tasks ...
To fill this gap, we employ word/subword-level models that adopt large-scale data-driven methods, such as pre-trained language models and transfer learning, to analyze text in the clinical domain ...
Acknowledgments We sincerely thank the reviewers for their in-depth feedback that helped improve the paper. K. ...
doi:10.18653/v1/w19-5043
dblp:conf/bionlp/NamYJ19
fatcat:kj7o7rec6fgcnps2rxfsgk5re4
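As a rough illustration of the pre-trained-language-model approach this abstract describes, the sketch below runs NLI inference with a generic BERT checkpoint via the Hugging Face transformers library; the checkpoint, label set, and sentence pair are assumptions, not the paper's actual configuration.

```python
# Sketch of clinical NLI inference with a pre-trained language model,
# assuming the Hugging Face transformers library; the checkpoint and
# example sentences are placeholders, not the paper's setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-uncased"  # a domain-adapted checkpoint would fit better
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL, num_labels=3  # entailment / neutral / contradiction
)
model.eval()

# Standard NLI fine-tuning encodes the premise-hypothesis pair jointly.
batch = tokenizer(
    "No history of diabetes.",   # premise (hypothetical clinical sentence)
    "The patient is diabetic.",  # hypothesis
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**batch).logits
print(logits.argmax(dim=-1).item())  # index of the predicted NLI label
```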