Pooling-based continuous evaluation of information retrieval systems
2015
Information retrieval (Boston)
The dominant approach to evaluate the effectiveness of information retrieval (IR) systems is by means of reusable test collections built following the Cranfield paradigm. ...
The goal of this work is to study the behavior of standard IR metrics, IR system ranking, and of several pooling techniques in a continuous evaluation context by comparing continuous and non-continuous ...
First of all, the goal of a continuous evaluation campaign is to compare information retrieval systems based on relevance judgements made by humans. ...
doi:10.1007/s10791-015-9266-y
fatcat:2qovjezh4vd5fcxjkoi3aiwy6a
Continuous improvement of knowledge management systems using Six Sigma methodology
2013
Robotics and Computer-Integrated Manufacturing
Knowledge retrieval is a decisive part of the performance of a knowledge management system. In order to enhance retrieval accuracy, an effective performance evaluation mechanism is necessary. ...
In order to improve the performance of knowledge retrieval, this paper proposes an evaluation mechanism using Six Sigma methodology to help developers continuously control the knowledge retrieval process ...
Traditional information retrieval evaluation is usually laboratory-based, relying on a test collection, evaluation measures, and a comparative evaluation of algorithms, models, or systems ...
doi:10.1016/j.rcim.2012.04.018
fatcat:vpe5egyfcvgjzef3iyujglryuy
On real-time ad-hoc retrieval evaluation
2012
Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval - SIGIR '12
Lab-based evaluations typically assess the quality of a retrieval system with respect to its ability to retrieve documents that are relevant to the information need of an end user. ...
The proposed task can indeed assess the quality of a retrieval system with regard to retrieving both relevant and timely information. ...
INTRODUCTION Lab-based evaluations typically assess the quality of a retrieval system with respect to its ability to retrieve documents that are relevant to the information need of an end user. ...
doi:10.1145/2348283.2348498
dblp:conf/sigir/RobertsonK12a
fatcat:spq5efactng27my4dczsl3u5hm
Building Better Search Engines by Measuring Search Quality
2014
IT Professional Magazine
Search engines help users locate particular information within large stores of content developed for human consumption. ...
For example, users expect web search engines to direct searchers to web sites based on the content of the site rather than the site address, and video search engines someday to return video clips based ...
A pool is the union of the top X documents retrieved by each of the participating systems' searches for a given topic. ...
doi:10.1109/mitp.2013.105
fatcat:hk3zocjbxjawhfuye4k7gkcvqq
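The pooling definition quoted above (the union of the top X documents retrieved by each participating system for a topic) can be sketched in a few lines. This is only an illustrative sketch; the function name, the runs, and the depth are made up, not taken from the paper:

```python
# Minimal sketch of depth pooling: take each system's ranked run for a
# topic and keep the union of its top-`depth` document IDs.
def build_pool(runs, depth):
    """Union of the top-`depth` documents across all systems' rankings."""
    pool = set()
    for ranking in runs:
        pool.update(ranking[:depth])
    return pool

# Hypothetical runs from three systems for one topic, pool depth 2.
runs = [
    ["d1", "d2", "d3"],
    ["d2", "d4", "d1"],
    ["d5", "d1", "d2"],
]
print(sorted(build_pool(runs, 2)))  # → ['d1', 'd2', 'd4', 'd5']
```

Only documents in this pool are judged by assessors; everything outside it is typically assumed non-relevant, which is what makes pool depth a cost/quality trade-off.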
Building High-Quality Datasets for Information Retrieval Evaluation at a Reduced Cost
2019
Proceedings (MDPI)
Information Retrieval is no longer exclusively about document ranking. New tasks are continuously proposed in this and sibling fields. ...
With this proliferation of tasks, it becomes crucial to have a cheap way of constructing test collections to evaluate the new developments. ...
Conflicts of Interest: The authors declare no conflict of interest. ...
doi:10.3390/proceedings2019021033
fatcat:fnpzzammcjdqtmqdfe32fzpn2e
Report on INEX 2013
2013
SIGIR Forum
The informativeness measure is based on lexical overlap between a pool of relevant passages (RPs) and participant summaries. ...
For INEX 2013, we explored two different retrieval tasks that continue from INEX 2012: • The classic Ad-hoc Retrieval task investigates informational queries to be answered mainly by the textual contents ...
doi:10.1145/2568388.2568393
fatcat:2igtwoe23fedloome5yrzms6fm
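A lexical-overlap informativeness score of the kind described in the INEX report can be sketched as a simple unigram coverage ratio. The actual INEX metrics are based on n-gram dissimilarities between distributions, so this single-word version is only a hedged illustration; the function name and data are invented:

```python
# Illustrative unigram version of a lexical-overlap informativeness score:
# the fraction of distinct terms from the relevant-passage pool that also
# appear in a participant summary.
def lexical_overlap(reference_pool, summary):
    """Share of distinct reference-pool terms covered by the summary."""
    ref_terms = set(" ".join(reference_pool).lower().split())
    sum_terms = set(summary.lower().split())
    if not ref_terms:
        return 0.0
    return len(ref_terms & sum_terms) / len(ref_terms)

# Pool {'the', 'cat', 'sat'}; summary covers only 'cat' → 1/3.
print(lexical_overlap(["the cat sat"], "a cat"))
```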
Exploiting Pooling Methods for Building Datasets for Novel Tasks
2019
BCS-IRSG Symposium on Future Directions in Information Access
Information Retrieval is no longer exclusively about document ranking. New tasks are continuously proposed in this and sibling fields. ...
We aim to achieve flexibility in terms of adding new retrieval models and pooling strategies to the system. We want the platform also to be useful to evaluate the obtained collections. ...
Introduction In Information Retrieval, under the Cranfield paradigm, test collections are the most widely used method for evaluating the effectiveness of new systems [15] . ...
dblp:conf/fdia/Otero19
fatcat:5gjnyaehtnb5naoik54nxqhaaa
Effects of Usage-Based Feedback on Video Retrieval
2011
ACM Transactions on Information Systems
a usage-based information pool, to name a few. ...
An evaluation strategy is proposed based on simulated user actions, which enables the evaluation of our recommendation strategies over a usage information pool obtained from 24 users performing four different ...
SIMULATION-BASED INTERACTIVE RETRIEVAL EVALUATION As explained earlier, our evaluation approach requires usage information from past users, collected in the implicit pool graph, and the trace of the current ...
doi:10.1145/1961209.1961214
fatcat:lshdfvddxvac7mjcnymjsby7dq
Overview of TREC 2007
2007
Text Retrieval Conference
My thanks to the coordinators who make the variety of different tasks addressed in TREC possible. ...
Acknowledgements The track summaries in section 3 are based on the track overview papers authored by the track coordinators. ...
based on the document sets retrieved. ...
dblp:conf/trec/Voorhees07
fatcat:amjy53u3rfby7nt5ay4cq2hvty
Cross-Language Evaluation Forum: Objectives, Results, Achievements
2004
Information retrieval (Boston)
We also make proposals for future directions in system evaluation aimed at meeting emerging needs. ...
The Cross-Language Evaluation Forum (CLEF) is now in its fourth year of activity. ...
Without their generous support the CLEF evaluation activity would be impossible. ...
doi:10.1023/b:inrt.0000009438.69013.fa
fatcat:pu7xyyfwkzhjdh7wptsn3qsuce
Report on INEX 2011
2012
SIGIR Forum
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results ...
This paper reports on the INEX 2011 evaluation campaign, which consisted of five active tracks: Books and Social Search, Data Centric, Question Answering, Relevance Feedback, and Snippet Retrieval. ...
Informativeness evaluation has been performed by organizers on a pool of 50 topics. For each of these topics, all passages submitted have been evaluated. ...
doi:10.1145/2215676.2215679
fatcat:djgjh5ylfjf75alaj7xvfqsuze
A Combined Method of Naïve-Bayes and Pooling Strategy for Building Test Collection for Arabic/English Information Retrieval
2021
International Journal of Computing and Digital Systems
In this paper, we examine the feasibility of building information retrieval test collections based on two combined methods: the pooling strategy and the Naïve-Bayes machine-learning algorithm. ...
The paper empirically shows that the use of machine-learning algorithms combined with the pooling strategy serves to build information retrieval collections efficiently and more quickly. ...
use of statistical experimentation of information retrieval systems. ...
doi:10.12785/ijcds/100161
fatcat:sd5cwcsdqng4fchpj3vgyfqtcq
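The idea of pairing pooling with a Naïve-Bayes classifier can be sketched as: judge a small pooled sample by hand, then let the classifier extend those judgements to the rest of the pool. This is a from-scratch sketch under that reading of the abstract, not the authors' actual pipeline; all function names and example documents are invented:

```python
import math
from collections import Counter

# Hedged sketch: a tiny multinomial Naive Bayes over document text,
# trained on a handful of human relevance judgements, used to label
# the remaining pooled documents.
def train_nb(docs, labels):
    """Fit per-class priors and word log-likelihoods (add-one smoothing)."""
    classes = set(labels)
    counts = {c: Counter() for c in classes}
    priors = Counter(labels)
    for text, c in zip(docs, labels):
        counts[c].update(text.lower().split())
    vocab = {w for cnt in counts.values() for w in cnt}
    model = {}
    for c in classes:
        total = sum(counts[c].values())
        model[c] = (
            math.log(priors[c] / len(docs)),                     # class prior
            {w: math.log((counts[c][w] + 1) / (total + len(vocab)))
             for w in vocab},                                    # word likelihoods
            math.log(1 / (total + len(vocab))),                  # unseen-word fallback
        )
    return model

def predict(model, text):
    """Return the class with the highest posterior log-probability."""
    words = text.lower().split()
    def score(c):
        prior, likes, unk = model[c]
        return prior + sum(likes.get(w, unk) for w in words)
    return max(model, key=score)

# Two judged-relevant and one judged-non-relevant training documents.
model = train_nb(
    ["good relevant match", "excellent relevant", "spam junk"],
    ["rel", "rel", "nonrel"],
)
print(predict(model, "relevant match"))  # → rel
```

The appeal of such a hybrid is cost: assessors judge only a slice of the pool, and the classifier fills in the rest, at the price of some labeling noise.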
Evaluating performance of biomedical image retrieval systems—An overview of the medical image retrieval task at ImageCLEF 2004–2013
2015
Computerized Medical Imaging and Graphics
Evaluation Forum (CLEF), a challenge evaluation for information retrieval from diverse languages [24]. ...
the content of images in the manner that information retrieval and extraction systems have been able to do so with text [4, 38]. ...
forum and framework for evaluating the state of the art in biomedical image information retrieval [4,18,24,26,28,29,30,41].
Motivation: An important goal is to develop systems ...
doi:10.1016/j.compmedimag.2014.03.004
pmid:24746250
pmcid:PMC4177510
fatcat:xuck5guwojexviqczz4keargjm
Variations in relevance assessments and the measurement of retrieval effectiveness
1996
Journal of the American Society for Information Science
We can no longer rest the evaluation of information retrieval systems on the assumption that such variations do not significantly affect the measurement of information retrieval performance. ...
, user-based relevance, point to a single conclusion. ...
Measures of retrieval performance have continued to be based on "relevance" since the first retrieval experiments of the 1950s. ...
doi:10.1002/(sici)1097-4571(199601)47:1<37::aid-asi4>3.0.co;2-3
fatcat:cmjlyyyg65cvvlv6kpnaxy6lrq