
Narrative Question Answering with Cutting-Edge Open-Domain QA Techniques: A Comprehensive Study

Xiangyang Mou, Chenghao Yang, Mo Yu, Bingsheng Yao, Xiaoxiao Guo, Saloni Potdar, Hui Su
2021 Transactions of the Association for Computational Linguistics  
Recent advancements in open-domain question answering (ODQA), that is, finding answers from a large open-domain corpus like Wikipedia, have led to human-level performance on many datasets.  ...  human studies. Our findings indicate that event-centric questions dominate this task, which exemplifies the inability of existing QA models to handle event-oriented scenarios.  ...  Acknowledgments This work is funded by RPI-CISL, a center in IBM's AI Horizons Network, and the Rensselaer-IBM AI Research Collaboration (RPI-AIRC).  ... 
doi:10.1162/tacl_a_00411 fatcat:khhhxz2wqrdoja4gvtx5enx32a

A Neural Comprehensive Ranker (NCR) for Open-Domain Question Answering [article]

Bin Bi, Hao Ma
2017 arXiv   pre-print
This paper proposes a novel neural machine reading model for open-domain question answering at scale.  ...  A Q&A system based on this framework allows users to issue an open-domain question without needing to provide a piece of text that must contain the answer.  ...  The Answer Output Layer produces the start and end indices of the answer span in a paragraph for a given question.  ... 
arXiv:1709.10204v2 fatcat:zx466rb4c5dztcfmxe67yqid6e
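
The span-prediction step mentioned in the snippet above (producing start and end indices of the answer span) is common to most extractive readers. Below is a minimal, generic sketch of how per-token start/end scores can be turned into an answer span; the `best_span` helper and the toy score arrays are illustrative assumptions, not code from the NCR paper.

```python
# Minimal sketch (not the NCR code): given per-token start/end scores from a
# reader model, pick the highest-scoring valid (start, end) answer span.
import numpy as np

def best_span(start_logits, end_logits, max_answer_len=15):
    """Return (start, end) maximizing start_logits[i] + end_logits[j]
    subject to i <= j <= i + max_answer_len."""
    best, best_score = (0, 0), -np.inf
    for i, s in enumerate(start_logits):
        window = end_logits[i : i + max_answer_len + 1]
        j = int(np.argmax(window)) + i
        score = s + end_logits[j]
        if score > best_score:
            best, best_score = (i, j), score
    return best

# Made-up example: the scores peak on the token "paris".
tokens = "the capital of france is paris .".split()
start = np.array([0.1, 0.0, 0.2, 0.3, 0.1, 2.5, 0.0])
end   = np.array([0.0, 0.1, 0.1, 0.2, 0.3, 2.8, 0.1])
i, j = best_span(start, end)
print(tokens[i : j + 1])   # ['paris']
```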

Analyzing Linguistic Features for Answer Re-Ranking of Why-Questions

Manvi Breja, Sanjay Kumar Jain
2022 Journal of Cases on Information Technology  
candidate being relevant for the question.  ...  An answer re-ranker model is implemented that finds the highest-ranked answer as the one with the largest feature similarity between question and answer candidate, thus achieving 0.64 Mean Reciprocal  ...  The model was trained on open-domain Yahoo!  ... 
doi:10.4018/jcit.20220701.oa10 fatcat:rd2wqn3p2ndadnh4qhqqmmta2i
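
For reference, a Mean Reciprocal Rank figure like the 0.64 quoted in the snippet is the average, over all questions, of 1/rank of the first correct answer in the re-ranked candidate list. A self-contained sketch with invented toy data (not the paper's evaluation code):

```python
# Illustrative MRR computation over re-ranked answer candidates.
def mean_reciprocal_rank(ranked_lists, correct_answers):
    """ranked_lists[i] is the re-ranker's ordering of candidates for question i;
    correct_answers[i] is the gold answer for that question."""
    total = 0.0
    for candidates, gold in zip(ranked_lists, correct_answers):
        for rank, cand in enumerate(candidates, start=1):
            if cand == gold:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)

# Toy example: gold answers found at ranks 1, 2 and 4 -> (1 + 0.5 + 0.25) / 3
print(mean_reciprocal_rank(
    [["a", "b"], ["x", "y", "z"], ["p", "q", "r", "s"]],
    ["a", "y", "s"],
))  # 0.583...
```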

Knowledge-Driven Distractor Generation for Cloze-style Multiple Choice Questions [article]

Siyu Ren, Kenny Q. Zhu
2020 arXiv   pre-print
In this paper, we propose a novel configurable framework to automatically generate distractive choices for open-domain cloze-style multiple-choice questions, which incorporates a general-purpose knowledge  ...  This dataset can also be used as a benchmark for distractor generation in the future.  ...  Acknowledgement Sincere gratitude goes to Xinzhu (Elena) Cai for her initial contribution and subsequent assistance in preparing this paper.  ... 
arXiv:2004.09853v3 fatcat:rmdmsonfxfcwnd6sl5jy4z6d7y
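
The knowledge-driven distractor generation idea sketched in this abstract can be illustrated with a tiny, hypothetical knowledge base: candidate distractors share the gold answer's type and are filtered against the passage. This is an assumption-laden toy sketch, not the paper's configurable framework.

```python
# Hedged sketch: same-type neighbours from a toy KB serve as distractors,
# minus any that the passage itself already supports.
KB = {  # entity -> type (illustrative assumption)
    "Paris": "city", "Lyon": "city", "Berlin": "city",
    "France": "country", "Germany": "country",
}

def generate_distractors(answer, passage, n=3):
    same_type = [e for e, t in KB.items()
                 if t == KB.get(answer) and e != answer]
    # Keep only distractors not already mentioned in the passage.
    return [e for e in same_type if e.lower() not in passage.lower()][:n]

passage = "___ is the capital of France and hosts the Louvre."
print(generate_distractors("Paris", passage))   # ['Lyon', 'Berlin']
```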

COBERT: COVID-19 Question Answering System Using BERT

Jafar A. Alzubi, Rachna Jain, Anubhav Singh, Pritee Parwekar, Meenu Gupta
2021 Arabian Journal for Science and Engineering  
which compares the logit scores to produce a short answer, the title of the paper, and the source article of extraction.  ...  Taking these challenges into account, we have proposed COBERT: a retriever-reader dual algorithmic system that answers complex queries by searching a corpus of 59K coronavirus-related literature  ...  A collection of 47K medical question-answer pairs was built and shared. This approach works well for both open-domain and domain-specific settings.  ... 
doi:10.1007/s13369-021-05810-5 pmid:34178569 pmcid:PMC8220121 fatcat:dx4dkreeaba57gzayjxbnsinoy
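
The retriever-reader pattern that COBERT's abstract describes can be outlined roughly as follows: a lexical retriever narrows the corpus to a few candidate papers, a reader scores an answer in each, and the scores are compared across documents to return a short answer with the title of its source paper. The `toy_reader`, the two-paper corpus, and the scoring rule below are placeholders, not the published system.

```python
# Rough sketch of a retriever-reader pipeline (TF-IDF retrieval + toy reader).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {  # tiny stand-in for the 59K-document corpus
    "Paper A": "covid-19 is caused by the sars-cov-2 virus",
    "Paper B": "masks reduce droplet transmission in crowded places",
}

def toy_reader(question, text):
    """Placeholder reader returning (answer, score); a real system would use a
    BERT-style reader and compare its start/end logits instead."""
    words = text.split()
    overlap = len(set(question.lower().split()) & set(words))
    return max(words, key=len), float(overlap)

def answer_question(question, top_k=2):
    titles, texts = list(papers), list(papers.values())
    vec = TfidfVectorizer().fit(texts + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(texts))[0]
    best = None
    for i in sims.argsort()[::-1][:top_k]:           # retrieved documents
        ans, score = toy_reader(question, texts[i])  # read each one
        if best is None or score > best[1]:          # compare reader scores
            best = (ans, score, titles[i])
    return best[0], best[2]

print(answer_question("what virus causes covid-19"))  # ('sars-cov-2', 'Paper A')
```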

Recent Trends in Deep Learning Based Open-Domain Textual Question Answering Systems

Zhen Huang, Shiyi Xu, Minghao Hu, Xinyi Wang, Jinyan Qiu, Yongquan Fu, Yuncai Zhao, Yuxing Peng, Changjian Wang
2020 IEEE Access  
Open-domain textual question answering (QA), which aims to answer questions from large data sources like Wikipedia or the web, has gained wide attention in recent years.  ...  INDEX TERMS Open-domain textual question answering, deep learning, machine reading comprehension, information retrieval.  ...  In the following year, at the 38th ACL conference, a special discussion topic "Open-domain Question Answering" was opened up.  ... 
doi:10.1109/access.2020.2988903 fatcat:po4euxfronf3pob52qc2wcgrre

Standing on the Shoulders of Giant Frozen Language Models [article]

Yoav Levine, Itay Dalmedigos, Ori Ram, Yoel Zeldes, Daniel Jannai, Dor Muhlgay, Yoni Osin, Opher Lieber, Barak Lenz, Shai Shalev-Shwartz, Amnon Shashua, Kevin Leyton-Brown (+1 others)
2022 arXiv   pre-print
However, current leading techniques for leveraging a "frozen" LM -- i.e., leaving its weights untouched -- still often underperform fine-tuning approaches which modify these weights in a task-dependent  ...  fine-tuning in challenging domains without sacrificing the underlying model's versatility.  ...  A FROZEN LM READER FOR OPEN-DOMAIN QUESTION ANSWERING The dominant approach for performing open-domain question answering (ODQA) is the retrieve-read framework (Chen et al., 2017), also referred to as  ... 
arXiv:2204.10019v1 fatcat:2qbkbgljifgalga6c7vadswwne
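
The retrieve-read framework referenced in the snippet, combined with a frozen LM reader, amounts to packing retrieved passages and the question into a prompt for a model whose weights are never updated. A minimal sketch, where `retrieve` and `frozen_lm_generate` are stand-ins rather than the paper's components:

```python
# Minimal retrieve-read sketch with a frozen LM reader (all components are toys).
def retrieve(question, corpus, k=2):
    # Stand-in retriever: rank passages by word overlap with the question.
    q = set(question.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:k]

def frozen_lm_generate(prompt):
    # In practice this would call a large pretrained LM whose weights stay frozen.
    return "(model output here)"

def answer(question, corpus):
    passages = retrieve(question, corpus)
    context = "\n".join(f"Passage {i+1}: {p}" for i, p in enumerate(passages))
    prompt = f"{context}\nQuestion: {question}\nAnswer:"
    return frozen_lm_generate(prompt)

corpus = ["Paris is the capital of France.", "The Nile flows through Egypt."]
print(answer("What is the capital of France?", corpus))
```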

Multi-style Generative Reading Comprehension

Kyosuke Nishida, Itsumi Saito, Kosuke Nishida, Kazutoshi Shinoda, Atsushi Otsuka, Hisako Asano, Junji Tomita
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
We propose a multi-style abstractive summarization model for question answering, called Masque. The proposed model has two key characteristics.  ...  Second, whereas previous studies built a specific model for each answer style because of the difficulty of acquiring one general model, our approach learns multi-style answers within a model to improve  ...  Generative models suffer from a dearth of training data to cover open-domain questions.  ... 
doi:10.18653/v1/p19-1220 dblp:conf/acl/NishidaSNSOAT19 fatcat:qw6qv34umfcwxa5xmn2kh4nmxi

RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering [article]

Srinivasan Iyer, Sewon Min, Yashar Mehdad, Wen-tau Yih
2020 arXiv   pre-print
State-of-the-art Machine Reading Comprehension (MRC) models for Open-domain Question Answering (QA) are typically trained for span selection using distantly supervised positive examples and heuristically  ...  This training scheme possibly explains empirical observations that these models achieve a high recall amongst their top few predictions, but a low overall accuracy, motivating the need for answer re-ranking  ...  Introduction Open-domain Question Answering (Voorhees et al., 1999) (QA) involves answering questions by extracting correct answer spans from a large corpus of passages, and is typically accomplished  ... 
arXiv:2010.10757v1 fatcat:ti5xmklgxbbg3b7apvciucbsi4
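
The observation motivating RECONSIDER, that recall among the top few predictions is high while top-1 accuracy is low, can be made concrete with two toy metrics computed over ranked predictions; the example data below is invented purely for illustration.

```python
# Toy illustration: the gold answer often sits in the top-k list even when the
# single best guess is wrong, which is the gap that answer re-ranking exploits.
def top1_accuracy(preds, golds):
    return sum(p[0] == g for p, g in zip(preds, golds)) / len(golds)

def topk_recall(preds, golds, k=5):
    return sum(g in p[:k] for p, g in zip(preds, golds)) / len(golds)

preds = [["1947", "1945", "1950"],       # gold at rank 2
         ["Paris", "Lyon", "Nice"],      # gold at rank 1
         ["Nile", "Amazon", "Congo"]]    # gold at rank 2
golds = ["1945", "Paris", "Amazon"]
print(top1_accuracy(preds, golds))       # 0.33...
print(topk_recall(preds, golds, k=3))    # 1.0
```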

Crawler for Efficiently Harvesting Web

K Praveen Kumar
2017 International Journal of Communication Technology for Social Networking Services  
To attain more accurate results for a targeted crawl, SmartCrawler ranks websites to prioritize highly relevant ones for a given topic.  ...  Our experimental results on a group of representative domains show the agility and accuracy of our proposed crawler framework, which efficiently retrieves deep-web interfaces from large-scale sites and  ...  We are able to categorise QA into Open-Domain [4], Definitional QA [4], and List QA [6].  ... 
doi:10.21742/ijctsns.2017.5.1.02 fatcat:7big4ws6yba4dfwjbk4kdubdbq

MIX : a Multi-task Learning Approach to Solve Open-Domain Question Answering [article]

Sofian Chaybouti, Achraf Saghe, Aymen Shabou
2021 arXiv   pre-print
In this paper, we introduce MIX: a multi-task deep learning approach to solve Open-Domain Question Answering.  ...  Our system is on par with state-of-the-art performance on the SQuAD-open benchmark while being conceptually simpler.  ...  One could try to apply QA models to the Open-Domain Question Answering paradigm, which aims to answer questions using a large collection of documents as the knowledge source.  ... 
arXiv:2012.09766v2 fatcat:gyssch45vvemdlot5k5hhv3hxu

Answering questions by learning to rank - Learning to rank by answering questions

George Sebastian Pirtoaca, Traian Rebedea, Stefan Ruseti
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
Second, we propose a model employing the semantic ranking that holds the first place in two of the most popular leaderboards for answering multiple-choice questions: ARC Easy and Challenge.  ...  Answering multiple-choice questions in a setting in which no supporting documents are explicitly provided continues to stand as a core problem in natural language processing.  ...  Chen et al. (2017) proposed a model, called DrQA, that was trained on the SQuAD 1.1 dataset (Rajpurkar et al., 2016) to find the correct answer to open-domain questions.  ... 
doi:10.18653/v1/d19-1256 dblp:conf/emnlp/PirtoacaRR19 fatcat:nanrtg2ktvgmrdqi3m6drlukay

RocketQA: An Optimized Training Approach to Dense Passage Retrieval for Open-Domain Question Answering [article]

Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, Haifeng Wang
2021 arXiv   pre-print
In open-domain question answering, dense passage retrieval has become a new paradigm to retrieve relevant passages for finding answers.  ...  Typically, the dual-encoder architecture is adopted to learn dense representations of questions and passages for semantic matching.  ...  Introduction Open-domain question answering (QA) aims to find the answers to questions expressed in natural language from a large collection of documents.  ... 
arXiv:2010.08191v2 fatcat:abwwfka3svcsdfipm5454hfifm
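
The dual-encoder matching described in the snippet can be sketched with a toy bag-of-words encoder standing in for RocketQA's trained transformers: passages are embedded offline into an index and the question embedding is compared by inner product. Everything below (vocabulary size, encoder, corpus) is an illustrative assumption.

```python
# Sketch of dual-encoder dense retrieval with a toy hashed bag-of-words encoder.
import numpy as np

VOCAB = 1000
def encode(text):
    """Toy encoder: hashed bag-of-words, L2-normalized. A real dual encoder
    would use a trained transformer here."""
    v = np.zeros(VOCAB)
    for w in text.lower().split():
        v[hash(w) % VOCAB] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

passages = ["Paris is the capital of France",
            "The Great Wall is located in China",
            "Water boils at 100 degrees Celsius"]
index = np.stack([encode(p) for p in passages])   # offline passage index

question = "what is the capital of france"
scores = index @ encode(question)                 # inner-product semantic matching
for i in np.argsort(-scores)[:2]:
    print(round(float(scores[i]), 3), passages[i])
```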

DuReader_retrieval: A Large-scale Chinese Benchmark for Passage Retrieval from Web Search Engine [article]

Yifu Qiu, Hongyu Li, Yingqi Qu, Ying Chen, Qiaoqiao She, Jing Liu, Hua Wu, Haifeng Wang
2022 arXiv   pre-print
Additionally, we provide two out-of-domain testing sets for cross-domain evaluation, as well as a cross-lingual set that has been manually translated for cross-lingual retrieval.  ...  In this paper, we present DuReader-retrieval, a large-scale Chinese dataset for passage retrieval. DuReader-retrieval contains more than 90K queries and over 8M unique passages from Baidu search.  ...  Natural Questions (Kwiatkowski et al., 2019) is an open-domain question answering benchmark that consists of real queries issued to the Google search engine.  ... 
arXiv:2203.10232v3 fatcat:hrjvgnejpzb4pky4xzc5the4hm

Answering questions by learning to rank – Learning to rank by answering questions [article]

George-Sebastian Pîrtoacă, Traian Rebedea, Stefan Ruseti
2019 arXiv   pre-print
Second, we propose a model employing the semantic ranking that holds the first place in two of the most popular leaderboards for answering multiple-choice questions: ARC Easy and Challenge.  ...  Answering multiple-choice questions in a setting in which no supporting documents are explicitly provided continues to stand as a core problem in natural language processing.  ...  Chen et al. (2017) proposed a model, called DrQA, that was trained on the SQuAD 1.1 dataset (Rajpurkar et al., 2016) to find the correct answer to open-domain questions.  ... 
arXiv:1909.00596v2 fatcat:x5epmmvu2vecleukpklnm4u4ui
Showing results 1–15 of 562