Experiments on Cross-Language Information Retrieval Using Comparable Corpora of Chinese, Japanese, and Korean Languages [chapter]

Kazuaki Kishida, Kuang-hua Chen
2020 Evaluating Information Retrieval and Access Tasks  
NTCIR CLIR tasks have been built on the basis of test collections that incorporate such comparable corpora. We summarize the technical advances observed in these CLIR tasks at the end of the paper.  ...  Information access Research (NTCIR)-1 to NTCIR-6 evaluation cycles, which mainly focused on Chinese, Japanese, and Korean (CJK) languages.  ...  Additionally, in NTCIR-1 and -2, Toshihiko Nozue, Souichiro Hidaka, Hiroyuki Kato, and Masaharu Yoshioka also joined to organize the IR tasks.  ... 
doi:10.1007/978-981-15-5554-1_2 fatcat:x7e2tnp7gjekzlqzgkzaddu2mi

CLEF eHealth Evaluation Lab 2020 [chapter]

Hanna Suominen, Liadh Kelly, Lorraine Goeuriot, Martin Krallinger
2020 Lecture Notes in Computer Science  
CLEF eHealth 2019 evaluation lab: Kelly, L.; Goeuriot, L.; Suominen, H.; Neves, M.; Kanoulas, E.; Spijker, R.; Azzopardi, L.; Li, D.; Jimmy; Palotti, J.; Zuccon, G.  ...  The CLEF eHealth 2019 evaluation lab is supported in part by the CLEF Initiative, Data61/CSIRO, a Google Faculty Research Award, and ARC DECRA grant DE180101579.  ...  We gratefully acknowledge the people involved in the CLEF eHealth labs as participants or organizers. We also acknowledge the many organizations that have supported CLEF eHealth labs since 2012.  ...
doi:10.1007/978-3-030-45442-5_76 fatcat:ugsktnraazh4dl74jmyspc4wfu

CLEF eHealth 2019 Evaluation Lab [chapter]

Liadh Kelly, Lorraine Goeuriot, Hanna Suominen, Mariana Neves, Evangelos Kanoulas, Rene Spijker, Leif Azzopardi, Dan Li, Jimmy, João Palotti, Guido Zuccon
2019 Lecture Notes in Computer Science  
Since 2012 CLEF eHealth has focused on evaluation resource building efforts around the easing and support of patients, their next-of-kins, clinical staff, and health scientists in understanding, accessing  ...  Herein, we describe the CLEF eHealth evaluation series to date and then present the 2019 tasks, evaluation methodology, and resources.  ...  The CLEF eHealth 2019 evaluation lab is supported in part by (in alphabetical order) the CLEF Initiative and Data61/CSIRO.  ...
doi:10.1007/978-3-030-15719-7_36 fatcat:jgvtoftlrjfkrnujgxxrdelag4

Evaluation over thousands of queries

Ben Carterette, Virgil Pavlu, Evangelos Kanoulas, Javed A. Aslam, James Allan
2008 Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '08  
There has been a great deal of recent work on evaluation over much smaller judgment sets: how to select the best set of documents to judge and how to estimate evaluation measures when few judgments are  ...  We present results of the track, along with deeper analysis: investigating tradeoffs between the number of queries and number of judgments shows that, up to a point, evaluation over more queries with fewer  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors.  ... 
doi:10.1145/1390334.1390445 dblp:conf/sigir/CarterettePKAA08 fatcat:odo3xgvydrcpdpzg7wjmpwhylm

Development and evaluation of a geographic information retrieval system using fine grained toponyms

Ross Purves, Damien Palacio, Curdin Derungs
2015 Journal of Spatial Information Science  
We explore the effectiveness of three systems (a text baseline, spatial query expansion, and a full GIR system utilizing both text and spatial indexes) at retrieving documents from a corpus describing  ...  To allow evaluation, we use user generated content (UGC) in the form of metadata associated with individual articles to build a test collection of queries and judgments.  ...  We also thank three anonymous reviewers for their constructive suggestions which helped us improve and clarify this paper. Elise Acheson is also thanked for her helpful comments on the manuscript.  ... 
doi:10.5311/josis.2015.11.193 fatcat:oybs5w7bdzaa3odtmkuwokdc7a

Evaluation in discussion sessions of conference presentations: theoretical foundations for a multimodal analysis

Mercedes Querol-Julián, Inmaculada Fortanet-Gómez
2016 Kalbotyra  
helps to strengthen and openly convey the speaker's stance.  ...  It is grounded in the principles of corpus linguistics, genre and conversation analysis, systemic functional linguistics, pragmatics, and multimodal discourse analysis.  ...  We would like to thank the presenters who gave their permission to be recorded and to use examples taken from these recordings in our research and publications.  ...
doi:10.15388/klbt.2014.7676 fatcat:xfwoczjajnd45bhnhja2qzsgxy

Increasing evaluation sensitivity to diversity

Peter B. Golbus, Javed A. Aslam, Charles L. A. Clarke
2013 Information retrieval (Boston)  
Ideally, diversity evaluation measures would distinguish between systems by the amount of diversity in the ranked lists they produce.  ...  This is especially true in the context of Web search. To account for this, much recent research has focused on creating systems that produce diverse ranked lists.  ...  The diversity difficulties of the TREC 2010 and 2011 corpora are analyzed in Sect. 4.2.  ...  Definition: Imagine a collection and a topic with ten subtopics and 1,009 relevant documents.  ...
doi:10.1007/s10791-012-9218-8 fatcat:5a3p7padabfmdp55ua2cpmz5sa

Methods for Evaluating Interactive Information Retrieval Systems with Users

Diane Kelly
2007 Foundations and Trends in Information Retrieval  
systems, Kantor's [161] review of feedback and its evaluation in IR, Rorvig's [223] review of psychometric measurement in IR, Harter and Hert's [123] review of IR system evaluation, and Wang's [290]  ...  This article (1) provides historical background on the development of user-centered approaches to the evaluation of interactive information retrieval systems; (2) describes the major components of interactive  ...  Fabrizio Sebastiani and Jamie Callan for their great patience and encouragement; and three anonymous reviewers for their careful and thoughtful comments.  ...
doi:10.1561/1500000012 fatcat:w2ek674zgfbhlnhorrklwmbuyy

The challenging task of summary evaluation: an overview

Elena Lloret, Laura Plaza, Ahmet Aker
2017 Language Resources and Evaluation  
Evaluation is crucial in the research and development of automatic summarization applications, in order to determine the appropriateness of a summary based on different criteria, such as the content it  ...  In this article, a critical and historical analysis of evaluation metrics, methods, and datasets for automatic summarization systems is presented, where the strengths and weaknesses of evaluation efforts  ...  Acknowledgements: This research is partially funded by the European Commission under the Seventh Framework Programme for Research and Technological Development (FP7, 2007–2013) through the SAM (FP7  ...
doi:10.1007/s10579-017-9399-2 fatcat:tduxzlv2hfbfvd6evzoxn5xibu

Successful approaches in the TREC video retrieval evaluations

Alexander G. Hauptmann, Michael G. Christel
2004 Proceedings of the 12th annual ACM international conference on Multimedia - MULTIMEDIA '04  
The search evaluations are grouped into interactive (with a human in the loop) and noninteractive (where the human merely enters the query into the system) submissions.  ...  This paper reviews successful approaches in evaluations of video retrieval over the last three years.  ...  The topics are defined by NIST to reflect many of the sorts of queries real users pose, based on query logs against video corpora like the BBC Archives and other empirical data [6, 9] .  ... 
doi:10.1145/1027527.1027681 dblp:conf/mm/HauptmannC04 fatcat:45xpbxgavbcw3c6w6yx42ssla4

Principles for robust evaluation infrastructure

Justin Zobel, William Webber, Mark Sanderson, Alistair Moffat
2011 Proceedings of the 2011 workshop on Data infrastructurEs for supporting information retrieval evaluation - DESIRE '11  
The standard "Cranfield" approach to the evaluation of information retrieval systems has been used and refined for nearly fifty years, and has been a key element in the development of large-scale retrieval  ...  In this position statement we briefly review some aspects of evaluation and, based on our research and observations over the last decade, outline some principles on which we believe new infrastructure  ...  The TREC (and similar) run archives have underpinned much of our own research on system measurement and allow, for example, systems to be re-evaluated in the light of new measurement techniques.  ... 
doi:10.1145/2064227.2064247 fatcat:pilg7zgq4zc4xbgt7sqfdjfwbq

System evaluation of archival description and access

Junte Zhang
2012 SIGIR Forum  
The system evaluation of EAD finding aids is an IR evaluation research methodology to gauge IR effectiveness.  ...  A test collection is a key component in IR evaluation.  ...  We find that the transition from the top to the introductory information requires the most time. Next, we focus on the characteristics of the already-visited group in Table 5.14.  ...
doi:10.1145/2093346.2093367 fatcat:uoqzc5kavva2hn3j7kxzybhlwu

An overview of semantic search evaluation initiatives

Khadija M. Elbedweihy, Stuart N. Wrigley, Paul Clough, Fabio Ciravegna
2015 Journal of Web Semantics  
However, despite the wealth of experience accumulated from evaluating Information Retrieval (IR) systems, the evaluation of Semantic Web search systems has largely been developed in isolation from mainstream  ...  In this paper, we review existing approaches to IR evaluation and analyse evaluation activities for Semantic Web search systems.  ...  The effectiveness of an IR system is measured on the basis of how well the system retrieves relevant items in response to given search requests.  ...
doi:10.1016/j.websem.2014.10.001 fatcat:lnfhakcuoral5hyles4n52exmu

An Overview of Semantic Search Evaluation Initiatives

Khadija M. Elbedweihy, Stuart N. Wrigley, Paul Clough, Fabio Ciravegna
2015 Social Science Research Network  
However, despite the wealth of experience accumulated from evaluating Information Retrieval (IR) systems, the evaluation of Semantic Web search systems has largely been developed in isolation from mainstream  ...  In this paper, we review existing approaches to IR evaluation and analyse evaluation activities for Semantic Web search systems.  ...  The effectiveness of an IR system is measured on the basis of how well the system retrieves relevant items in response to given search requests.  ...
doi:10.2139/ssrn.3199177 fatcat:rspnzjpj5zc5tpyi5lsth34hdq

Overview of the CLEF eHealth Evaluation Lab 2016 [chapter]

Liadh Kelly, Lorraine Goeuriot, Hanna Suominen, Aurélie Névéol, João Palotti, Guido Zuccon
2016 Lecture Notes in Computer Science  
In this paper, we provide an overview of the sixth annual edition of the CLEF eHealth evaluation lab.  ...  CLEF eHealth 2018 continues our evaluation resource building efforts around the easing and support of patients, their next-of-kins, clinical staff, and health scientists in understanding, accessing, and  ...  The CLEF eHealth 2018 evaluation lab has been supported in part by (in alphabetical order) the ANU, the CLEF Initiative, the Data61/CSIRO, and the French National Research Agency (ANR), under grant CABeRneT  ... 
doi:10.1007/978-3-319-44564-9_24 fatcat:qenzcwmuqzfl7ngtlkou4osynq
Showing results 1–15 of 1,845