
TREC Complex Answer Retrieval Overview

Laura Dietz, Manisha Verma, Filip Radlinski, Nick Craswell
2017 Text Retrieval Conference  
Data Set Creation Pipeline The TREC Complex Answer Retrieval benchmark (v1.5) is derived from Wikipedia so that complex topics are chosen from articles on open information needs, i.e., not people, not  ...  Among them, challenges such as conversational answer retrieval, subdocument retrieval, and answer aggregation share commonalities: We desire answers to complex needs, and wish to find them in a single  ... 
dblp:conf/trec/DietzVRC17 fatcat:uhh4chhgpraktma37u5epypiou

On the evolution of the yahoo! answers QA community

Yandong Liu, Eugene Agichtein
2008 Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '08  
In our setting we compare the language models for, say, TREC factoid questions (P), and Yahoo! Answers questions (Q).  ...  Answers, with respect to its effectiveness in answering three basic types of questions: factoid, opinion and complex questions. Our experiments show that Yahoo!  ...  Answers question retrieval engine.  ... 
doi:10.1145/1390334.1390478 dblp:conf/sigir/LiuA08 fatcat:7vdmdx6cf5c7zdxdw2ryb23lsa

Question Answering Systems: Analysis and Survey

Eman Mohamed Nabil Alkholy, Mohamed Hassan Haggag, Amal Aboutabl
2018 International Journal of Computer Science & Engineering Survey  
This survey paper provides an overview of what Question Answering is and of QA systems, as well as of previous related research with respect to the approaches that were followed.  ...  Using a computer to answer questions has been a human dream since the beginning of the digital era; question-answering systems are referred to as intelligent systems.  ...  In the next campaign, TREC-9, held in 2000, the number of questions and the size of the document collections were increased. TREC-10 in 2001 introduced a new complexity with respect to answers.  ... 
doi:10.5121/ijcses.2018.9601 fatcat:idiboompuvgpbk2ld2z5vpl4ca

IRIT at TREC 2019: Incident Streams and Complex Answer Retrieval Tracks

Alexis Dusart, Gilles Hubert, Karen Pinel-Sauvagnat
2019 Text Retrieval Conference  
This paper presents the approaches proposed by the IRIS team of the IRIT laboratory for the TREC Incident Streams and Complex Answer Retrieval tracks.  ...  Then, the 2019 edition of the Complex Answer Retrieval (CAR) track aims to answer complex questions expressed as outlines using paragraphs from Wikipedia.  ...  Random Forest gives us better results than linear regression at detecting high-priority tweets. Complex Answer Retrieval: Overview of the TREC CAR track. Current retrieval systems provide good solutions  ... 
dblp:conf/trec/DusartHP19 fatcat:7le4ugxktza73czcl6dpuhcbom
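The snippet above describes CAR queries as article outlines: each query identifies a section of a Wikipedia page by its title and heading path, and systems retrieve paragraphs for it. A minimal sketch of flattening such an outline into query strings (the separator and example headings are illustrative assumptions, not the track's exact serialization):

```python
# Sketch: turn a Wikipedia-style outline into one retrieval query per
# section, as in TREC CAR. The "/" separator and the example article
# are assumptions for illustration only.

def outline_to_queries(title, heading_paths):
    """Flatten an article outline into one query string per section.

    heading_paths: list of tuples, each a nested heading path.
    """
    queries = []
    for path in heading_paths:
        queries.append(" / ".join((title,) + path))
    return queries

qs = outline_to_queries(
    "Green sea turtle",
    [("Ecology and behavior", "Diet"), ("Life cycle",)],
)
```

Each resulting string, e.g. a title plus its heading path, then serves as the query for which paragraphs are retrieved.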

Overview of TREC 2007

Ellen M. Voorhees
2007 Text Retrieval Conference  
My thanks to the coordinators who make the variety of different tasks addressed in TREC possible.  ...  Acknowledgements The track summaries in section 3 are based on the track overview papers authored by the track coordinators.  ...  The goal of the task is to extend systems' abilities to answer more complex information needs than those covered in the main task and to provide a limited form of interaction with the user in a QA setting  ... 
dblp:conf/trec/Voorhees07 fatcat:amjy53u3rfby7nt5ay4cq2hvty

Benchmark for Complex Answer Retrieval

Federico Nanni, Bhaskar Mitra, Matt Magnusson, Laura Dietz
2017 Proceedings of the ACM SIGIR International Conference on Theory of Information Retrieval - ICTIR '17  
Providing answers to complex information needs is a challenging task. The new TREC Complex Answer Retrieval (TREC CAR) track introduces a large-scale dataset where paragraphs are to be retrieved in response  ...  . The goal is to offer an overview of some promising approaches to tackle this problem.  ...  Answer Retrieval (TREC CAR) for open-domain queries has been recently introduced [10] . The task and related dataset are based on the assumption that each Wikipedia page represents a complex topic, with  ... 
doi:10.1145/3121050.3121099 dblp:conf/ictir/NanniMMD17 fatcat:wtkbphutorcmvnhdz7mvtce6vq

A Brief Survey of Question Answering Systems

Michael Caballero
2021 International Journal of Artificial Intelligence & Applications  
This survey summarizes the history and current state of the field and is intended as an introductory overview of QA systems.  ...  Question Answering (QA) is a subfield of Natural Language Processing (NLP) and computer science focused on building systems that automatically answer questions from humans in natural language.  ...  , to complex queries that require explanation. This survey aims to provide a succinct overview of the history and current state of question answering systems.  ... 
doi:10.5121/ijaia.2021.12501 fatcat:jhydawao5vgaxoghxrp6v37f2q

Overview of the TREC 2006

Ellen M. Voorhees
2006 Text Retrieval Conference  
The track summaries in section 3 are based on the track overview papers authored by the coordinators.  ...  Acknowledgements Special thanks to the track coordinators who make the variety of different tasks addressed in TREC possible.  ...  The details of the evaluation methodology used in a particular track are described in the track's overview paper. TREC 2006 Tracks TREC's track structure was begun in TREC-3 (1994).  ... 
dblp:conf/trec/Voorhees06 fatcat:5olfha4lxvcqbahxz7rl6bldha

Using Profile Matching and Text Categorization for Answer Extraction in TREC Genomics

Haiqing Zheng, Chen Lin, Lishen Huang, Jun Xu, Jiaqian Zheng, Qi Sun, Junyu Niu
2006 Text Retrieval Conference  
The TREC'06 genomics track focused on text mining and passage retrieval. The WIM lab participated in this year's TREC genomics track.  ...  Our system consists of five parts: preprocessing, sentence generation, document retrieval, answer extraction and answer fusion.  ...  System Overview: Our system mainly contains five parts: preprocessing, sentence generation, document retrieval, text mining and answer fusion.  ... 
dblp:conf/trec/ZhengLHXZSN06 fatcat:z55iybyotbbttgjwddwfd2kdoy

Overview of the TREC 2006 ciQA task

Diane Kelly, Jimmy Lin
2007 SIGIR Forum  
Growing interest in interactive systems for answering complex questions led to the development of the complex, interactive QA (ciQA) task, introduced for the first time at TREC 2006.  ...  Recent publications, interests of funding sponsors, and discussions at TREC and other conferences indicate that the field of question answering is moving in two major directions: • A shift away from "  ...  Acknowledgements We thank Hoa Dang and Ellen Voorhees for helpful discussions and their hard work in making TREC possible.  ... 
doi:10.1145/1273221.1273231 fatcat:vojkrexx4nawrlrc6423pqmadu

EXAM: How to Evaluate Retrieve-and-Generate Systems for Users Who Do Not (Yet) Know What They Want

David P. Sander, Laura Dietz
2021 Biennial Conference on Design of Experimental Search & Information Retrieval Systems  
To be effective, such systems should be allowed to combine retrieval with language generation.  ...  pose for today's IR evaluation paradigms, we propose EXAM, an evaluation paradigm that uses held-out exam questions and an automated question-answering system to evaluate how well generated responses can answer  ...  Within the TREC Complex Answer Retrieval track [4] , we aspire to retrieve-and-generate overview articles as found on Wikipedia.  ... 
dblp:conf/desires/SanderD21 fatcat:xzqey4opabbjjd4mudsgkwbfye

Automated Question Answering System for Community-Based Questions

Chanin Pithyaachariyakul, Anagha Kulkarni
We present our attempt at developing an efficient Question Answering system for both factoid and non-factoid questions from any domain.  ...  Empirical evaluation of our system using multiple datasets demonstrates that our system outperforms the best system from the TREC LiveQA tracks, while keeping the response time to less than half  ...  , (iii) extracting short units of text as candidate answers from the retrieved documents, and (iv) selecting the best answer using effective ranking algorithms (Wang et al. 2015) .  ... 
doi:10.1609/aaai.v32i1.12159 fatcat:5spijmsxrrabdenqin4zsljmha
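The snippet above outlines the classic extract-then-rank QA pattern: pull short text units out of retrieved documents, then rank them as candidate answers. A toy sketch of those two steps, assuming nothing about the authors' actual system (the n-gram extraction and word-overlap scorer are illustrative stand-ins for their real components):

```python
# Sketch of steps (iii) and (iv) described above: extract short text units
# as candidate answers, then rank them. Extraction and scoring here are
# deliberately simplistic placeholders, not the paper's method.

def extract_candidates(documents, max_len=3):
    """Extract short text units (word n-grams up to max_len) from documents."""
    candidates = []
    for doc in documents:
        words = doc.split()
        for i in range(len(words)):
            for n in range(1, max_len + 1):
                if i + n <= len(words):
                    candidates.append(" ".join(words[i:i + n]))
    return candidates

def rank_candidates(question, candidates):
    """Toy ranking: score each candidate by word overlap with the question."""
    q_words = set(question.lower().split())
    def score(cand):
        return len(q_words & set(cand.lower().split()))
    return sorted(set(candidates), key=score, reverse=True)

docs = ["The capital of France is Paris", "Paris is in France"]
ranked = rank_candidates("What is the capital of France?", extract_candidates(docs))
```

Real systems replace the overlap scorer with learned ranking models; the control flow, however, follows this extract-then-rank shape.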

Benchmark for Complex Answer Retrieval [article]

Federico Nanni, Bhaskar Mitra, Matt Magnusson, Laura Dietz
2017 arXiv   pre-print
The new TREC Complex Answer Retrieval (TREC CAR) track introduces a comprehensive dataset that targets this retrieval scenario.  ...  The goal is to offer future participants of this track an overview of some promising approaches to tackle this problem.  ...  report [2] , a new TREC track on Complex Answer Retrieval (TREC CAR) for open-domain queries has been recently introduced [10] . The task and related dataset are based on the assumption that each Wikipedia  ... 
arXiv:1705.04803v1 fatcat:vgboyaprvzgvlevjs3xunlmhn4

The University of Sheffield's TREC 2006 Q&A Experiments

Mark A. Greenwood, Mark Stevenson, Robert J. Gaizauskas
2006 Text Retrieval Conference  
In this way we configured two main runs for the main task of the 2006 TREC QA evaluation: shef06qal This run used QA-LaSIE to answer factoid and list questions using documents retrieved from AQUAINT  ...  TREC evaluations).  ... 
dblp:conf/trec/GreenwoodSG06 fatcat:dl3maq27hnc5jhuk77oqzlbt5a

LSTM vs. BM25 for Open-domain QA

Sosuke Kato, Riku Togashi, Hideyuki Maeda, Sumio Fujita, Tetsuya Sakai
2017 Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval - SIGIR '17  
Answers) questions and over 27.4 million answers for training, and the other based on BM25. Both systems use the same Q&A knowledge source for answer retrieval.  ...  However, most such studies rely on relatively small data sets, e.g., those extracted from the old TREC QA tracks.  ...  In the retrieval phase, given a question, we rank answers by Euclidean distance, using the same mechanism depicted in Figure 1. Figure 2 shows an overview of our demonstration system.  ... 
doi:10.1145/3077136.3084147 dblp:conf/sigir/KatoTMFS17 fatcat:g5t3wx4y2rfqrbm7osvg2qhhgm
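The retrieval phase described in the snippet above ranks answers by Euclidean distance between question and answer vectors. A minimal sketch of that ranking step, with a bag-of-words embedding as a deliberately crude stand-in for the paper's LSTM encoder (vocabulary construction and all names here are assumptions for illustration):

```python
# Sketch: rank candidate answers by Euclidean distance to the question
# in embedding space. The bag-of-words embed() is a placeholder for a
# learned encoder such as the LSTM mentioned above.
import math
from collections import Counter

def embed(text, vocab):
    """Bag-of-words vector over a fixed vocabulary (stand-in encoder)."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank_answers(question, answers):
    """Sort answers by increasing Euclidean distance to the question vector."""
    vocab = sorted(set(" ".join([question] + answers).lower().split()))
    q = embed(question, vocab)
    return sorted(answers, key=lambda a: euclidean(q, embed(a, vocab)))

ranked = rank_answers(
    "what is the capital of france",
    ["paris is the capital of france", "bm25 is a ranking function"],
)
```

A BM25 baseline, as in the paper's comparison, would replace the distance-based sort with a term-weighting score over the same answer pool.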