Evaluation of Information Access Technologies at the NTCIR Workshop
[chapter]
2004
Lecture Notes in Computer Science
The aims of the NTCIR project are: 1. to encourage research in information access technologies by providing large-scale test collections reusable for experiments, 2. to provide a forum for research groups ...
for constructing large-scale reusable test collections. ...
Test Collections The test collections constructed for the NTCIR Workshops are listed in Table 2 . ...
doi:10.1007/978-3-540-30222-3_4
fatcat:kox5oyahanh2hlhybiwfct6rce
CLIR System Evaluation at the Second NTCIR Workshop
[chapter]
2002
Lecture Notes in Computer Science
large-scale test collections and a forum for researchers. ...
A brief history, tasks, participants, test collections, CLIR evaluation at the workshops, and plan for the next workshop are described in this paper. ...
providing large-scale test collections and a forum for researchers. ...
doi:10.1007/3-540-45691-0_35
fatcat:e7hs7aldgncv3mqr74j3zolbqy
Test Collection Based Evaluation of Information Retrieval Systems
2010
Foundations and Trends in Information Retrieval
Across the nearly 60 years since that work started, use of test collections is a de facto standard of evaluation. ...
At its core, the modern-day test collection is little different from the structures that the pioneering researchers in the 1950s and 1960s conceived of. ...
Conclusion In this section, the initial development of large-scale test collections using a pooling approach for building qrels was described, along with the measures used to assess effectiveness ...
doi:10.1561/1500000009
fatcat:qdacqkqj25eojkpchctdzvrt2e
Cross-Language Retrieval for the CLEF Collections — Comparing Multiple Methods of Retrieval
[chapter]
2001
Lecture Notes in Computer Science
To help enrich the CLEF relevance set for future training, we prepared a manual reformulation of the original German queries which achieved excellent performance, more than 110% better than average of ...
Combining all techniques using simple data fusion produced the best results. ...
For future research we are creating a Russian version of the GIRT queries to test strategies for Russian-German retrieval via a multilingual thesaurus. ...
doi:10.1007/3-540-44645-1_11
fatcat:32sjasfcnvbaxjsrgqethecpre
An evaluation of the Web retrieval task at the third NTCIR workshop
2004
SIGIR Forum
This paper gives an overview of the Web Retrieval Task that was conducted from 2001 to 2002 at the Third NTCIR Workshop. ...
In the Web Retrieval Task, we attempted to assess the retrieval effectiveness of Web search engine systems using a common data set, and built a re-usable test collection suitable for evaluating Web search ...
We greatly appreciate the efforts of all the participants of the Web Retrieval Task at the Third NTCIR Workshop. ...
doi:10.1145/986278.986285
fatcat:z6rg3kvttngyze7g2slffmuepa
The Future of Information Retrieval Evaluation
[chapter]
2020
Evaluating Information Retrieval and Access Tasks
Looking back over the storied history of NTCIR that is recounted in this volume, we can see many impactful contributions. ...
As we look at the future, we might then ask what points of continuity and change we might reasonably anticipate. Beginning that discussion is the focus of this chapter. ...
Ian Soboroff, Ellen Voorhees, and Charles Wayne for discussions over the years that have helped to shape his thinking on this topic. ...
doi:10.1007/978-981-15-5554-1_14
fatcat:7lkxdwqndremtmqi24ww3cssca
Building Better Search Engines by Measuring Search Quality
2014
IT Professional Magazine
Search engines help users locate particular information within large stores of content developed for human consumption. ...
The NIST Text REtrieval Conference (TREC) project has been instrumental in creating the necessary infrastructure to measure the quality of search results for more than twenty years, and has thus helped ...
The first test collection resulted from a series of experiments regarding indexing languages at the Cranfield College of Aeronautics in the 1960s [1]. ...
doi:10.1109/mitp.2013.105
fatcat:hk3zocjbxjawhfuye4k7gkcvqq
Patent Retrieval: A Literature Review
[article]
2017
arXiv
pre-print
Patent Retrieval (PR) is considered the pillar of almost all patent analysis tasks. ...
With the ever increasing number of filed patent applications every year, the need for effective and efficient systems for managing such tremendous amounts of data becomes inevitably important. ...
NTCIR has been organizing a series of workshops providing test collections to researchers for evaluating their methodologies on multiple CLIR tasks [8] . ...
arXiv:1701.00324v1
fatcat:u5w55z4cj5cwbii5s55rjvegvi
Harnessing the Scientific Data Produced by the Experimental Evaluation Search Engines and Information Access Systems
2011
Procedia Computer Science
Test Collection for IR Systems (NTCIR) in Japan and Asia. ...
In this context, large-scale evaluation initiatives provide a significant contribution to the advancement in research and state-of-the-art, industrial innovation in a given domain, and building of strong ...
Acknowledgements The authors would like to thank Emanuele Di Buccio for his help in the preparation of the final version of this paper. ...
doi:10.1016/j.procs.2011.04.078
fatcat:fyraz62yy5bbdflzcvkykmquli
Performance Comparison of Ad-Hoc Retrieval Models over Full-Text vs. Titles of Documents
[chapter]
2018
Lecture Notes in Computer Science
On the other hand, conducting a search based on titles alone has strong limitations. Titles are short and therefore may not contain enough information to yield satisfactory search results. ...
The difference between the average evaluation results of the best title-based retrieval models is only % less than those of the best full-text-based retrieval models. ...
For this purpose, we utilize five datasets, out of which three are obtained from digital libraries: PubMed, Econbiz and IREON, and two standard test collections [ ]: NTCIR-and TREC Disks & . ...
doi:10.1007/978-3-030-04257-8_30
fatcat:wwxzjk57urcgrakcnwn4iecu7u
An Overview of Evaluation Campaigns in Multimedia Retrieval
[chapter]
2010
ImageCLEF
This chapter sets the scene for the book by describing the purpose of system and user-centred evaluation, the purpose of test collections, the role of evaluation campaigns such as TREC and CLEF, our motivations ...
In this chapter we discuss evaluation of Information Retrieval (IR) systems and in particular ImageCLEF, a large-scale evaluation campaign that has produced several publicly-accessible resources required ...
Andrews University Library in Scotland for providing us access to the historic set of photographs for the first ImageCLEF evaluation campaign. ...
doi:10.1007/978-3-642-15181-1_27
fatcat:nnglqvlfw5a6fceju5vrwyswim
Seven Years of Image Retrieval Evaluation
[chapter]
2010
ImageCLEF
This chapter sets the scene for the book by describing the purpose of system and user-centred evaluation, the purpose of test collections, the role of evaluation campaigns such as TREC and CLEF, our motivations ...
In this chapter we discuss evaluation of Information Retrieval (IR) systems and in particular ImageCLEF, a large-scale evaluation campaign that has produced several publicly-accessible resources required ...
Andrews University Library in Scotland for providing us access to the historic set of photographs for the first ImageCLEF evaluation campaign. ...
doi:10.1007/978-3-642-15181-1_1
fatcat:yuvmyscbufg5lclr36z4pughm4
Cross-Language Evaluation Forum: Objectives, Results, Achievements
2004
Information retrieval (Boston)
We summarize the main lessons learned during this period, outline the state-of-the-art of the research reported in the CLEF experiments and discuss the contribution that this initiative has made to research ...
The Cross-Language Evaluation Forum (CLEF) is now in its fourth year of activity. ...
We are particularly grateful to Donna Harman and Ellen Voorhees from NIST, organizers of TREC, for their tireless support. ...
doi:10.1023/b:inrt.0000009438.69013.fa
fatcat:pu7xyyfwkzhjdh7wptsn3qsuce
Experiments in Lifelog Organisation and Retrieval at NTCIR
[chapter]
2020
Evaluating Information Retrieval and Access Tasks
In this chapter, a motivation is given for the Lifelog task and a review of progress since NTCIR-12 is presented. ...
The Lifelog task at NTCIR was a comparative benchmarking exercise with the aim of encouraging research into the organisation and retrieval of data from multimodal lifelogs. ...
Acknowledgements Many thanks to the editors and all authors of this book, and to the present and past organisers and participants of the NTCIR tasks. ...
doi:10.1007/978-981-15-5554-1_13
fatcat:fhsvm3teibelblxn2qapgfbhue
CLEF-IP 2009: Retrieval Experiments in the Intellectual Property Domain
[chapter]
2010
Lecture Notes in Computer Science
A large-scale test collection for evaluation purposes was created by exploiting patent citations. ...
The purpose of the track was twofold: to encourage and facilitate research in the area of patent retrieval by providing a large clean data set for experimentation; to create a large test collection of ...
Thanks to Evangelos Kanoulas and Emine Yilmaz for interesting discussions on creating large test collections. ...
doi:10.1007/978-3-642-15754-7_47
fatcat:qjcpy64xfzbhbjwhmqkoiduf2a
Showing results 1 — 15 out of 98 results