395 Hits in 6.3 sec

Retrieving and Reading: A Comprehensive Survey on Open-domain Question Answering [article]

Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming Zheng, Soujanya Poria, Tat-Seng Chua
2021 arXiv   pre-print
Open-domain Question Answering (OpenQA) is an important task in Natural Language Processing (NLP), which aims to answer a question in the form of natural language based on large-scale unstructured documents  ...  ACKNOWLEDGEMENTS This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative and A*STAR under its RIE  ...  When ambiguity is detected in the question, the conversational OpenQA system is expected to raise a follow-up question for clarification, such as "Do you mean the basketball player?".  ... 
arXiv:2101.00774v3 fatcat:6evkg5cikjdp5fsi3ou3iqqkyq

Learning to Rank Answers to Non-Factoid Questions from Web Collections

Mihai Surdeanu, Massimiliano Ciaramita, Hugo Zaragoza
2011 Computational Linguistics  
We show that it is possible to exploit existing large collections of question-answer pairs (from online social Question Answering sites) to extract such features and train ranking models which combine  ...  This work investigates the use of linguistically motivated features to improve search, in particular for ranking answers to non-factoid questions.  ...  This is equal for all re-rankers.  ... 
doi:10.1162/coli_a_00051 fatcat:l6eao4y535hljip2jr3auze44m

Topic Transferable Table Question Answering [article]

Saneem Ahmed Chemmengath, Vishwajeet Kumar, Samarth Bharadwaj, Jaydeep Sen, Mustafa Canim, Soumen Chakrabarti, Alfio Gliozzo, Karthik Sankaranarayanan
2021 arXiv   pre-print
Weakly-supervised table question-answering (TableQA) models have achieved state-of-the-art performance by using a pre-trained BERT transformer to jointly encode a question and a table to produce structured  ...  In response, we propose T3QA (Topic Transferable Table Question Answering), a pragmatic adaptation framework for TableQA comprising: (1) topic-specific vocabulary injection into BERT, (2) a novel text-to-text  ...  Further, TableQA may involve complex questions with multi-cell or aggregate answers.  ... 
arXiv:2109.07377v1 fatcat:4y4y5gj2tfdz7iysutdl2coo4m

Novelty based Ranking of Human Answers for Community Questions

Adi Omari, David Carmel, Oleg Rokhlenko, Idan Szpektor
2016 Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval - SIGIR '16  
Yet, especially when many answers are provided, the viewer may not want to sift through all answers but to read only the top ones.  ...  This approach is common in Web search and information retrieval, yet it was not addressed within the CQA settings before, which is quite different from classic document retrieval.  ...  We consider all the answers for a question to be relevant to some extent [3] . This does not mean that all the answer text is relevant to the question.  ... 
doi:10.1145/2911451.2911506 dblp:conf/sigir/OmariCRS16 fatcat:pn43qih27rhwnhbffixa2pnobi

Question Answering on Freebase via Relation Extraction and Textual Evidence [article]

Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, Dongyan Zhao
2016 arXiv   pre-print
Existing knowledge-based question answering systems often rely on small annotated training data.  ...  Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.  ...  This work is supported by National High Technology R&D Program of China (Grant No. 2015AA015403, 2014AA015102), Natural Science Foundation of China (Grant No. 61202233, 61272344, 61370055) and the joint  ... 
arXiv:1603.00957v3 fatcat:j66yh4uz2rgizhkplihr7ivsii

"Can you give me another word for hyperbaric?": Improving speech translation using targeted clarification questions

Necip Fazil Ayan, Arindam Mandal, Michael Frandsen, Jing Zheng, Peter Blasco, Andreas Kathol, Frederic Bechet, Benoit Favre, Alex Marin, Tom Kwiatkowski, Mari Ostendorf, Luke Zettlemoyer (+3 others)
2013 2013 IEEE International Conference on Acoustics, Speech and Signal Processing  
This task can get complicated, depending on the complexity of the user responses to clarification questions, with several possibilities: (1) answer exactly fits the error segment; (2) answer is anchored  ...  Answer Extraction and Merging The answer extraction and merging module is responsible for creating a corrected utterance given the initial user utterance and the answer to a clarification question.  ... 
doi:10.1109/icassp.2013.6639302 dblp:conf/icassp/AyanMFZBKBFMKOZSHS13 fatcat:3rafhvtmfnczdbekpmqupigi6u

Automatic question generation for supporting argumentation

Nguyen-Thinh Le, Nhu-Phuong Nguyen, Kazuhisa Seta, Niels Pinkwart
2014 Vietnam Journal of Computer Science  
Can questions which are semantically related to a given discussion topic help students develop further arguments?  ...  In this paper, we introduce a technical approach to generating questions upon the request of students during the process of collaborative argumentation.  ...  Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s)  ... 
doi:10.1007/s40595-014-0014-9 fatcat:62zk7qslkrfnrb5for6onjy4he

Question Answering over Electronic Devices: A New Benchmark Dataset and a Multi-Task Learning based QA Framework [article]

Abhilash Nandy, Soumya Sharma, Shubham Maddhashiya, Kapil Sachdeva, Pawan Goyal, Niloy Ganguly
2021 arXiv   pre-print
We introduce EMQAP (E-Manual Question Answering Pipeline) that answers questions pertaining to electronic devices.  ...  Answering questions asked from instructional corpora such as E-manuals, recipe books, etc., has been far less studied than open-domain factoid context-based question answering.  ...  This work is supported in part by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKI-Labor (grant no. 01DD20003).  ... 
arXiv:2109.05897v2 fatcat:7omvwogzijaehaerppiw42m2my

Question Answering on Freebase via Relation Extraction and Textual Evidence

Kun Xu, Siva Reddy, Yansong Feng, Songfang Huang, Dongyan Zhao
2016 Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
Existing knowledge-based question answering systems often rely on small annotated training data.  ...  Experiments on the WebQuestions question answering dataset show that our method achieves an F_1 of 53.3%, a substantial improvement over the state-of-the-art.  ...  This work is supported by National High Technology R&D Program of China (Grant No. 2015AA015403, 2014AA015102), Natural Science Foundation of China (Grant No. 61202233, 61272344, 61370055) and the joint  ... 
doi:10.18653/v1/p16-1220 dblp:conf/acl/XuRFHZ16 fatcat:4dol5nfvsjd77egkmxi3gy34hu

The Big Three: A Methodology to Increase Data Science ROI by Answering the Questions Companies Care About [article]

Daniel K. Griffin
2020 arXiv   pre-print
In this paper, we propose a methodology for categorizing and answering 'The Big Three' questions (what is going on, what is causing it, and what actions can I take that will optimize what I care about)  ...  Yet, data scientists seem to be solely focused on using classification, regression, and clustering methods to answer the question 'what is going on'.  ...  There is no unblocked path from X to Y. 3. All backdoor paths from Y to Z are blocked by X.  ... 
arXiv:2002.07069v1 fatcat:if3das3tpbeuba33gmsnmvulrm

Training Curricula for Open Domain Answer Re-Ranking [article]

Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, Ophir Frieder
2020 arXiv   pre-print
In precision-oriented tasks like answer ranking, it is more important to rank many relevant answers highly than to retrieve all relevant answers.  ...  In this work, we apply this idea to the training of neural answer rankers using curriculum learning. We propose several heuristics to estimate the difficulty of a given training sample.  ...  We use these metrics rather than MRR and P@1 because CAR queries often need many relevant passages to answer the question, not just one.  ... 
arXiv:2004.14269v1 fatcat:7ivxm62ggbcrxnmxyccgs24nqq
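The curriculum idea in the snippet above — estimate a difficulty for each training sample and let easy samples dominate early training — can be sketched as a per-sample weight. The linear annealing schedule and the [0, 1] difficulty scale below are illustrative assumptions, not the paper's exact heuristics:

```python
def curriculum_weight(difficulty, epoch, total_epochs):
    """Weight for one training sample under a simple curriculum.

    difficulty: estimated difficulty in [0, 1] (0 = easy, 1 = hard).
    Early in training, hard samples are down-weighted; the weights
    linearly anneal so that by the end all samples count equally.
    """
    progress = min(1.0, epoch / total_epochs)
    return 1.0 - (1.0 - progress) * difficulty
```

A ranker's per-sample loss would be multiplied by this weight, so the model first fits easy (q, answer) pairs and only gradually sees the full difficulty distribution.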

Neural Approaches to Conversational AI

Jianfeng Gao, Michel Galley, Lihong Li
2018 The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval - SIGIR '18  
all questions in QA on KB • Short-term memory: encode the passage(s) which contain the answer of a question in QA on Text • Working memory (hidden state) contains a description of the current state  ...  task-specific semantic space. Learning an answer ranker from labeled QA pairs: • Consider a query q and two candidate answers a+ and a− • Assume a+ is more relevant than a− with respect to q • sim  ...  No grounding into a real calendar, but the "shape" of the conversation is fluent and plausible…  ... 
doi:10.1145/3209978.3210183 dblp:conf/sigir/GaoG018 fatcat:pnhrb5jgdfgnxac3hxy52a65pm
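The pairwise ranker sketched in the tutorial snippet above (a query, a more-relevant answer a+, a less-relevant answer a−, and a similarity function sim) is typically trained with a margin loss. A minimal sketch, with cosine similarity standing in for the learned sim — the vector inputs and the margin value are illustrative assumptions, not the tutorial's exact setup:

```python
import math

def cosine_sim(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def pairwise_margin_loss(q, a_pos, a_neg, margin=0.5):
    # Penalize pairs where sim(q, a+) does not exceed sim(q, a-) by the margin;
    # the loss is zero once the positive answer is ranked clearly higher.
    return max(0.0, margin - (cosine_sim(q, a_pos) - cosine_sim(q, a_neg)))
```

Training then consists of minimizing this loss over labeled (q, a+, a−) triples, so that the encoder learns a task-specific semantic space in which relevant answers score higher.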

Self-Consistency Improves Chain of Thought Reasoning in Language Models [article]

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
2022 arXiv   pre-print
The idea is to sample a diverse set of reasoning paths from a language model via chain-of-thought prompting, then return the most consistent final answer in the set.  ...  We evaluate self-consistency on a range of arithmetic and commonsense reasoning benchmarks, and find that it robustly improves accuracy across a variety of language models and model scales without the need  ...  If you choose 2 oranges, you have 10C2 = 45 ways of choosing 2 oranges. So the answer is (a). [CommonsenseQA] The man laid on the soft moss and looked up at the trees, where was the man?  ... 
arXiv:2203.11171v2 fatcat:r6elgugparaetdwwhhke2sbfpe
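The decoding procedure described in the abstract above — sample several chain-of-thought completions, then take a majority vote over their final answers — reduces to a few lines. Here `sample_answer` is a hypothetical callable standing in for one stochastic model sample (temperature sampling of a full reasoning path, keeping only the extracted final answer):

```python
from collections import Counter

def self_consistency(sample_answer, n_samples=10):
    # Sample n independent reasoning paths and return the final answer
    # that occurs most often among them (the "most consistent" answer).
    answers = [sample_answer() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```

The vote is over final answers only, so two different reasoning paths that reach the same result reinforce each other, which is what makes the method robust to individual faulty chains.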

Social Relationship Identification: An Example of Social Query

Christopher P. Diehl, Jaime Montemayor, Mike Pekala
2009 2009 International Conference on Computational Science and Engineering  
When we have a question, we think of key words or phrases that will allow a search engine to find artifacts containing the answer.  ...  To understand the scope and complexity of the problem, one need look no further than the Enron scandal.  ... 
doi:10.1109/cse.2009.211 dblp:conf/cse/DiehlMP09 fatcat:sqy7ggslhzextkzh3ztaj3iubm

Effective and practical neural ranking

Sean MacAvaney
2021 SIGIR Forum  
I find that this approach is neither limited to the task of ad-hoc ranking (as demonstrated by ranking clinical reports) nor English content (as shown by training effective cross-lingual neural rankers  ...  All answers to the question must be about this entity, otherwise the answer is not valid. The facet is the particular detail about which the question inquires.  ...  Methodology As mentioned in Section 2.3.1, complex answer retrieval (CAR) is a new IR task focused on retrieving complex answers to questions that include a topic and facet.  ... 
doi:10.1145/3476415.3476432 fatcat:fdjy53sggvhgxo5fa5hzpede2i
Showing results 1 — 15 out of 395 results