QUALIFIER In TREC-12 QA Main Task

Hui Yang, Hang Cui, Mstislav Maslennikov, Long Qiu, Min-Yen Kan, Tat-Seng Chua
2003 Text Retrieval Conference  
This paper describes a question answering system and its various modules for solving the definition, factoid, and list questions defined in the TREC-12 Main task.  ...  In particular, we tackle the factoid QA task with Event-based Question Answering. Each QA event comprises elements describing different facets such as time, location, object, and action.  ...  This work was supported in part by the A*Star and NUS Joint-Lab funding.  ... 
dblp:conf/trec/YangCMQKC03 fatcat:ae3lvbqlnfcdpajzdxtbk2sxsy

Structured use of external knowledge for event-based open domain question answering

Hui Yang, Tat-Seng Chua, Shuguang Wang, Chun-Keat Koh
2003 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '03  
In order to overcome this problem, our earlier work integrates external knowledge extracted from the Web and WordNet to perform Event-based QA on the TREC-11 task.  ...  One of the major problems in question answering (QA) is that the queries are either too brief or often do not contain most relevant terms in the target corpus.  ...  To address this problem, a Question-Answering (QA) task was initiated in TREC conference series [ 18] .  ... 
doi:10.1145/860435.860444 dblp:conf/sigir/YangCWK03 fatcat:tftu6egg6rbkll4wgljph2oqge

Use of Metadata for Question Answering and Novelty Tasks

Kenneth C. Litkowski
2003 Text Retrieval Conference  
For the QA track, we submitted one run and our overall main task score was 0.075, with scores of 0.070 for factoid questions, 0.000 for list questions, and 0.160 for definition questions.  ...  This core technology was then extended to participate in the novelty task.  ...  TREC 2003 QA Results We submitted one run for the main QA task and two runs for the passage task.  ... 
dblp:conf/trec/Litkowski03 fatcat:zh26q22hqbbllg7widh3n6rfki

Exploring Document Content with XML to Answer Questions

Kenneth C. Litkowski
2005 Text Retrieval Conference  
CL Research participated in the question answering track in TREC 2004, submitting runs for the main task, the document relevance task, and the relationship task.  ...  Participants in the main task were also required to participate in the document-ranking task by submitting up to 1000 documents, ordered by score.  ...  In this conceptualization, it may be desirable for the main task to be the automatic development and completion of templates pertaining to a target of interest.  ... 
dblp:conf/trec/Litkowski05 fatcat:we3oist5czfg5mq2eca6crul4e

Compositional question answering: A divide and conquer approach

Hyo-Jung Oh, Ki-Youn Sung, Myung-Gil Jang, Sung Hyon Myaeng
2011 Information Processing & Management  
so that they can be answered directly using existing QA capabilities.  ...  Since individual answers are composed to generate the final answer, we call this process compositional QA.  ...  The most recent QA tracks in TREC 2006 (Dang & Lin, 2007a) and TREC 2007 (Dang, Lin, & Kelly, 2008) contained a complex interactive QA (ciQA) task, a blend of the relationship task and the HARD track that  ... 
doi:10.1016/j.ipm.2010.03.011 fatcat:2v6q2vivnnadxmvfdjj7dkgwky

The University of Sheffield TREC 2002 Q&A System

Mark A. Greenwood, Ian Roberts, Robert J. Gaizauskas
2002 Text Retrieval Conference  
qvar and eY in WordNet [12] .  ...  Conclusions and Future Work At its core, Sheffield's entry in this year's QA track remains the same as our TREC-9 system in 2000 [15] .  ... 
dblp:conf/trec/GreenwoodRG02 fatcat:ygfu6dxmizci7llpg3ruvkrhni

Using Semantic Roles to Improve Question Answering

Dan Shen, Mirella Lapata
2007 Conference on Empirical Methods in Natural Language Processing  
Experimental results on the TREC datasets demonstrate improvements over state-of-the-art models.  ...  We introduce a general framework for answer extraction which exploits semantic role annotations in the FrameNet paradigm.  ...  As a byproduct of our main investigation we also examine the issue of FrameNet coverage and show how much it impacts performance in a TREC-style question answering setting.  ... 
dblp:conf/emnlp/ShenL07 fatcat:vhloigrnibbppeq4vcjyyagyxm

Joint Models for Answer Verification in Question Answering Systems [article]

Zeyu Zhang, Thuy Vu, Alessandro Moschitti
2021 arXiv   pre-print
We tested our models on WikiQA, TREC-QA, and a real-world dataset. The results show that our models obtain the new state of the art in AS2.  ...  paper studies joint models for selecting correct answer sentences among the top k provided by answer sentence selection (AS2) modules, which are core components of retrieval-based Question Answering (QA)  ...  MAP values of 92.80% and 94.88% on WikiQA and TREC-QA, respectively. MASR improves ASR by 2% on WQA, since this contains enough data to train the ASR representations jointly.  ... 
arXiv:2107.04217v1 fatcat:z3vvruhxfzcqnbqgpu7u6f4p2e
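
For context on the MAP figures quoted in the snippet above, the following is a minimal sketch of how mean average precision is computed over ranked answer-sentence candidates; the function names and toy data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative, not from the paper): mean average precision
# (MAP) over ranked answer-sentence candidates, where each question has a
# ranked list of candidates labeled 1 (correct) or 0 (incorrect).

def average_precision(labels):
    """labels: 0/1 relevance labels of the candidates in ranked order."""
    hits, precisions = 0, []
    for rank, label in enumerate(labels, start=1):
        if label:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(ranked_lists):
    """ranked_lists: one label list per question."""
    return sum(average_precision(l) for l in ranked_lists) / len(ranked_lists)

# Toy data: Q1 has correct answers at ranks 1 and 3 (AP ≈ 0.833),
# Q2 has a correct answer at rank 2 (AP = 0.5); MAP ≈ 0.667.
print(mean_average_precision([[1, 0, 1, 0], [0, 1, 0]]))
```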

Using Syntactic and Semantic Relation Analysis in Question Answering

Renxu Sun, Jing Jiang, Yee Fan Tan, Hang Cui, Tat-Seng Chua, Min-Yen Kan
2005 Text Retrieval Conference  
In handling topics with qualifiers, for instance, "skier Alberto Tomba", we rely on the Web to separate the qualifiers from the main topic words, e.g., "Alberto Tomba" in the above example.  ...  This set of patterns includes the subset of patterns we used for TREC-12 that are used in TREC-13.  ... 
dblp:conf/trec/SunJTCCK05 fatcat:ksuzci6uezfqpff4veyikjamfe

A domain-independent approach to finding related entities

Olga Vechtomova, Stephen E. Robertson
2012 Information Processing & Management  
The evaluation was conducted on the Related Entity Finding task of the Entity Track of TREC 2010, as well as the QA list questions from TREC 2005 and 2006.  ...  Evaluation results demonstrate that the proposed methods are effective in finding related entities.  ...  For instance, the Question Answering (QA) track of the Text Retrieval Conference (TREC) (Dang et al., 2007) included so-called "list" questions in the main task, where the correct response for a query  ... 
doi:10.1016/j.ipm.2011.12.003 fatcat:f7hxlt36e5gw3henruj2ybyuyy

Question Answering: CNLP at the TREC 2002 Question Answering Track

Anne Diekema, Jiangping Chen, Nancy J. McCracken, Necati Ercan Ozgencil, Mary D. Taffet, Özgür Yilmazel, Elizabeth D. Liddy
2002 Text Retrieval Conference  
This paper describes the retrieval experiments for the main task and list task of the TREC-2002 question-answering track.  ...  The question answering system described automatically finds answers to questions in a large document collection.  ...  Results We submitted three runs for the TREC-2002 QA track: one run for the main task and two runs for the list task.  ... 
dblp:conf/trec/DiekemaCMOTYL02 fatcat:3hpy7pacgnbifkalpt5yswhepq

Emerging Trends: SOTA-Chasing

Kenneth Ward Church, Valia Kordoni
2022 Natural Language Engineering  
SOTA-chasing may be similar to the replication crisis in the scientific literature.  ...  Many papers are chasing state-of-the-art (SOTA) numbers, and more will do so in the future. SOTA-chasing comes with many costs.  ...  other things, the TREC QA tasks are not very representative of the Jeopardy task.  ... 
doi:10.1017/s1351324922000043 fatcat:ngmgsdbnd5dc7aleza53dwbiye

TREC-COVID: Rationale and Structure of an Information Retrieval Shared Task for COVID-19

Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen Voorhees, Lucy Lu Wang, William R Hersh
2020 Journal of the American Medical Informatics Association (JAMIA)  
TREC-COVID is an information retrieval (IR) shared task initiated to support clinicians and clinical research during the COVID-19 pandemic.  ...  TREC-COVID differs from traditional IR shared task evaluations with special considerations for the expected users, IR modality considerations, topic development, participant requirements, assessment process  ...  Contributions The authors are organizers of TREC-COVID. KER initially drafted the manuscript. All authors reviewed and approved the manuscript.  ... 
doi:10.1093/jamia/ocaa091 pmid:32365190 pmcid:PMC7239098 fatcat:rhqp7cvirrgvpfcyz7zji4z67u

The Alyssa System at TREC QA 2007: Do We Need Blog06?

Dan Shen, Michael Wiegand, Andreas Merkel, Stefan Kazalski, Sabine Hunsicker, Jochen L. Leidner, Dietrich Klakow
2007 Text Retrieval Conference  
We describe the participation of the Saarland University LSV group in the DARPA/NIST TREC 2007 Q&A track with the Alyssa system, using an approach that combines cascaded language-model based information  ...  This is an increase of 12%.  ...  It slightly outperformed lsv2007b by answering 6 more questions. lsv2007c placed fourth in the overall evaluation, third in the factoid task and even second in the definition task.  ... 
dblp:conf/trec/ShenWMKHLK07 fatcat:a26qzivj2jd33gtxz22m577dry