SICS at iCLEF 2002: Cross-Language Relevance Assessment and Task Context [chapter]

Jussi Karlgren, Preben Hansen
2003 Lecture Notes in Computer Science  
The results are related to task and context and an enhanced methodology for performing context-sensitive studies is reported.  ...  Results show that relevance assessment in a foreign language takes more time and is prone to errors compared to assessment in the reader's first language.  ...  Acknowledgments We thank Tidningarnas Telegrambyrå AB, Stockholm, for providing us with the Swedish text collection, and Heikki Keskustalo, University of Tampere, Johan Carlberger and Hercules Dalianis  ... 
doi:10.1007/978-3-540-45237-9_34 fatcat:l7vrtknft5fevpn5wkb6k3zxca

The CLEF 2002 Interactive Track [chapter]

Julio Gonzalo, Douglas W. Oard
2003 Lecture Notes in Computer Science  
Participating teams each compared two systems, both supporting a full retrieval task where users had to select relevant documents given a (native language) topic and a (foreign language) document collection  ...  In the CLEF 2002 Interactive Track, research groups interested in the design of systems to support interactive Cross-Language Retrieval used a shared experiment design to explore aspects of that question  ...  Braschler created the assessment pools; Ellen Voorhees, Michael Kluck, Eija Airio and Jorun Kugelberg provided native relevance assessments; and Jianqiang Wang and Dina Demner provided Systran translations  ... 
doi:10.1007/978-3-540-45237-9_33 fatcat:eldriija5fe73jyuhm3hzinmc4

What Happened in CLEF 2006: Introduction to the Working Notes

Carol Peters
2006 Conference and Labs of the Evaluation Forum  
This is done through the organisation of annual evaluation campaigns in which a series of tracks designed to test different aspects of mono- and cross-language information retrieval (IR) are offered.  ...  This has been achieved by offering an increasingly complex and varied set of evaluation tasks over the years.  ...  mono- and cross-language information on structured scientific data (Domain-Specific) • interactive cross-language retrieval (iCLEF) • multiple language question answering (QA@CLEF) • cross-language retrieval  ...
dblp:conf/clef/Peters06a fatcat:x2we7dsuurdezntkuqqpjbstqe

Cross-Language Evaluation Forum: Objectives, Results, Achievements

Martin Braschler, Carol Peters
2004 Information Retrieval (Boston)
The Cross-Language Evaluation Forum (CLEF) is now in its fourth year of activity.  ...  We also make proposals for future directions in system evaluation aimed at meeting emerging needs.  ...  Acknowledgments The authors would like to acknowledge the help, support and advice of numerous individuals and organizations which has been invaluable in the organization of the CLEF campaigns.  ... 
doi:10.1023/b:inrt.0000009438.69013.fa fatcat:pu7xyyfwkzhjdh7wptsn3qsuce

Observing users, designing clarity: A case study on the user-centered design of a cross-language information retrieval system

Daniela Petrelli, Micheline Beaulieu, Mark Sanderson, George Demetriou, Patrick Herring, Preben Hansen
2004 Journal of the American Society for Information Science and Technology  
P. (2004) Observing Users, Designing Clarity: a case study on the user-centred design of a cross-language information retrieval system.  ...  A study involving users (with such searching needs) from the start of the design process is described, covering initial examination of user needs and tasks, and preliminary design and testing of interface components  ...  Partners are: University of Sheffield (coordinator) (UK), University of Tampere (Finland), SICS -Swedish Institute for Computer Science (Sweden), Alma Media (Finland), BBC Monitoring (UK), and Tilde (Latvia  ...
doi:10.1002/asi.20036 fatcat:tzn5esookzgwdpec32lbgz3tfe

Workshop on Novel Methodologies for Evaluation in Information Retrieval [chapter]

Mark Sanderson, Martin Braschler, Nicola Ferro, Julio Gonzalo
2008 Lecture Notes in Computer Science
of scale and diversity.  ...  The workshop is composed of long and short papers covering a range of important evaluation methods and tools.  ...  CONCLUSIONS The iCLEF task has so far provided a substantial body of knowledge around the interactive aspects of Cross-Language Retrieval, but it has failed to engage the cross-lingual information retrieval  ... 
doi:10.1007/978-3-540-78646-7_86 dblp:conf/ecir/SandersonBFG08 fatcat:75ac5kcrvneglnzypu72zcvmka