Evaluating Passage Retrieval Approaches for Question Answering

Ian Roberts, Robert Gaizauskas
2004 Lecture Notes in Computer Science  
Automatic open domain question answering (QA) has been the focus of much recent research, stimulated by the introduction of a QA track in TREC in 1999. Many QA systems have been developed and most follow the same broad pattern of operation: first an information retrieval (IR) system, often passage-based, is used to find passages from a large document collection which are likely to contain answers, and then these passages are analysed in detail to extract answers from them. Most research to date
has focused on this second stage, with relatively little detailed investigation into aspects of IR component performance which impact on overall QA system performance. In this paper, we (a) introduce two new metrics, coverage and answer redundancy, which we believe capture aspects of IR performance specifically relevant to QA more appropriately than do the traditional recall and precision measures, and (b) demonstrate their use in evaluating a variety of passage retrieval approaches using questions from TREC-9 and TREC 2001.
doi:10.1007/978-3-540-24752-4_6
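The two metrics named in the abstract can be sketched in code. As a minimal illustration (the function name, data shapes, and regex-based answer matching here are assumptions for the sketch, not the paper's implementation): coverage@n is taken as the fraction of questions for which at least one answer-bearing passage appears in the top n retrieved passages, and answer redundancy@n as the mean number of answer-bearing passages per question in the top n.

```python
import re

def coverage_and_redundancy(ranked_passages, answer_patterns, n=20):
    """Sketch of coverage@n and answer redundancy@n.

    ranked_passages: dict mapping question id -> list of passage strings,
                     ordered by retrieval rank.
    answer_patterns: dict mapping question id -> list of regex patterns
                     that identify an answer-bearing passage.
    Returns (coverage, redundancy) over all questions.
    """
    hits_per_question = []
    for qid, passages in ranked_passages.items():
        patterns = [re.compile(p, re.IGNORECASE) for p in answer_patterns[qid]]
        # Count answer-bearing passages among the top n for this question.
        hits = sum(
            1 for passage in passages[:n]
            if any(pat.search(passage) for pat in patterns)
        )
        hits_per_question.append(hits)
    total = len(hits_per_question)
    coverage = sum(1 for h in hits_per_question if h > 0) / total
    redundancy = sum(hits_per_question) / total
    return coverage, redundancy
```

For example, with two questions where only the first has an answer-bearing passage in its top 2, coverage@2 would be 0.5 and redundancy@2 would be 0.5.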