758 Hits in 2.3 sec

INEX 2007 Evaluation Measures [chapter]

Jaap Kamps, Jovan Pehcevski, Gabriella Kazai, Mounia Lalmas, Stephen Robertson
2008 Lecture Notes in Computer Science  
This paper describes the official measures of retrieval effectiveness that are employed for the Ad Hoc Track at INEX 2007.  ...  In response, the INEX 2007 measures are based on the amount of highlighted text retrieved, leading to natural extensions of the well-established measures of precision and recall.  ...  At INEX 2007 we have adopted an evaluation framework that is based on the amount of highlighted text in relevant documents (similar to the HiXEval measures [15]).  ... 
doi:10.1007/978-3-540-85902-4_2 fatcat:iuawztbdhvb2firskwth6zpfwq
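
As a rough illustration of the highlighted-text approach this entry describes (precision and recall computed over the amount of highlighted text retrieved), here is a minimal Python sketch; the range-based input representation and the function names are assumptions, not the official INEX evaluation code.

```python
# Minimal sketch (not the official INEX code): character-based precision and
# recall over highlighted text, in the spirit of the measures described above.

def chars(ranges):
    """Expand (start, end) character ranges into a set of character offsets."""
    return {pos for start, end in ranges for pos in range(start, end)}

def highlight_precision_recall(retrieved_ranges, highlighted_ranges):
    """Precision = highlighted text retrieved / total text retrieved.
    Recall    = highlighted text retrieved / total highlighted text."""
    retrieved = chars(retrieved_ranges)
    highlighted = chars(highlighted_ranges)
    overlap = len(retrieved & highlighted)
    precision = overlap / len(retrieved) if retrieved else 0.0
    recall = overlap / len(highlighted) if highlighted else 0.0
    return precision, recall

# Example: a system returns characters 0-100 of a document in which
# characters 50-150 were highlighted as relevant by the assessor.
print(highlight_precision_recall([(0, 100)], [(50, 150)]))  # (0.5, 0.5)
```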

INEX 2005 Evaluation Measures [chapter]

Gabriella Kazai, Mounia Lalmas
2006 Lecture Notes in Computer Science  
doi:10.1007/978-3-540-34963-1_2 fatcat:7x6o3owp4zb2llreycldu673aa

Evaluation effort, reliability and reusability in XML retrieval

Sukomal Pal, Mandar Mitra, Jaap Kamps
2010 Journal of the American Society for Information Science and Technology  
Since 2007, INEX has been using a set of precision-recall based metrics for its ad hoc tasks.  ...  The authors investigate the reliability and robustness of these focused retrieval measures, and of the INEX pooling method.  ...  Various evaluation measures have been tried over the years at INEX.  ... 
doi:10.1002/asi.21403 fatcat:zusa7ro7brgwfpnnkxpvu3degq

Topic Difficulty Prediction in Entity Ranking [chapter]

Anne-Marie Vercoustre, Jovan Pehcevski, Vladimir Naumovski
2009 Lecture Notes in Computer Science  
Many approaches to entity ranking have been proposed, and most of them were evaluated on the INEX Wikipedia test collection.  ...  To predict the topic difficulty, we generate a classifier that uses features extracted from an INEX topic definition to classify the topic into an experimentally pre-determined class.  ...  This was the best performing entity ranking run at INEX 2007 (for the list completion task) when using the MAP measure.  ... 
doi:10.1007/978-3-642-03761-0_29 fatcat:qggfph7cmzdidoxssqxql7sg6m
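
The abstract describes training a classifier on features extracted from an INEX topic definition to predict a pre-determined difficulty class. The sketch below only shows the general shape of such a setup; the specific features, class labels, and the choice of scikit-learn are illustrative assumptions, not the authors' actual configuration.

```python
# Hedged sketch: classify INEX topics into difficulty classes from
# hypothetical per-topic features (title length, number of categories,
# number of example entities in the topic definition).
from sklearn.tree import DecisionTreeClassifier

X_train = [
    [3, 1, 2],    # short title, one category, two example entities
    [8, 4, 5],
    [5, 2, 3],
    [12, 6, 1],
]
y_train = ["easy", "hard", "easy", "hard"]  # pre-determined difficulty classes

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(clf.predict([[4, 1, 2]]))  # predicted difficulty class for a new topic
```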

XML Multimedia Retrieval: From Relevant Textual Information to Relevant Multimedia Fragments [chapter]

Mouna Torjmen, Karen Pinel-Sauvagnat, Mohand Boughanem
2009 Lecture Notes in Computer Science  
Experiments were done on the INEX 2006 and 2007 Multimedia Fragments task and show the interest of our method.  ...  Image retrieval is done using textual and structural information from ascendant, sibling and direct descendant nodes in the XML tree, while multimedia fragment retrieval is done by evaluating the score  ...  In both tasks, INEX 2006 and INEX 2007  ...  the impact of γ variation in Equation 7 on INEX 2006 results according to the iP[0.01] measure (Table 1); highest performance is obtained with  ... 
doi:10.1007/978-3-642-00958-7_16 fatcat:vgyyfjy7rze2lng65iyogmfx3m
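
The abstract mentions scoring images from the textual and structural information of ascendant, sibling and direct-descendant nodes, with a γ parameter appearing in the paper's Equation 7 (not shown here). The following sketch only illustrates the general idea of mixing a node's own textual score with scores from its XML context; the linear combination and the weights are assumptions, not the authors' formula.

```python
# Illustrative sketch: combine an image node's own textual score with the
# mean textual scores of its ancestor, sibling and direct-descendant nodes.

def image_score(self_score, ancestor_scores, sibling_scores, child_scores, gamma=0.5):
    """Weight the node's own score against the averaged context-node scores."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    context = (mean(ancestor_scores) + mean(sibling_scores) + mean(child_scores)) / 3
    return gamma * self_score + (1 - gamma) * context

# Example: an image whose caption matches the query weakly, but whose section
# (ancestor) and neighbouring paragraphs (siblings) match strongly.
print(image_score(0.2, [0.8, 0.6], [0.7], [0.1], gamma=0.4))  # 0.38
```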

Overview of the INEX 2008 Ad Hoc Track [chapter]

Jaap Kamps, Shlomo Geva, Andrew Trotman, Alan Woodley, Marijn Koolen
2009 Lecture Notes in Computer Science  
This paper gives an overview of the INEX 2007 Ad Hoc Track.  ...  The INEX 2007 Ad Hoc Track featured three tasks: For the Focused Task a ranked-list of non-overlapping results (elements or passages) was needed.  ...  We note that properly evaluating the effectiveness in XML-IR remains an ongoing research question at INEX. The INEX 2007 measures are solely based on the retrieval of highlighted text.  ... 
doi:10.1007/978-3-642-03761-0_1 fatcat:exrtt2h6gzdjxmoqiodqhrhmhy
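
The Focused Task requires a ranked list of non-overlapping elements or passages. One generic way to enforce this constraint is to filter a ranked element list so that no kept result is an ancestor or descendant of a higher-ranked one; the sketch below is such a post-filter, not necessarily what any particular INEX run did.

```python
# Generic overlap-removal post-filter for a ranked list of XPath-like element
# paths: keep a result only if it does not overlap an already-kept one.

def is_overlapping(path_a, path_b):
    """Two element paths overlap if one is a prefix (ancestor) of the other."""
    return (path_a == path_b
            or path_a.startswith(path_b + "/")
            or path_b.startswith(path_a + "/"))

def remove_overlap(ranked_paths):
    kept = []
    for path in ranked_paths:
        if not any(is_overlapping(path, k) for k in kept):
            kept.append(path)
    return kept

ranked = [
    "/article[1]/sec[2]/p[3]",
    "/article[1]/sec[2]",        # ancestor of the first hit: dropped
    "/article[1]/sec[5]/p[1]",
]
print(remove_overlap(ranked))
```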

Report on the SIGIR 2007 workshop on focused retrieval

Andrew Trotman, Shlomo Geva, Jaap Kamps
2007 SIGIR Forum  
On the 27th July 2007 the SIGIR 2007 Workshop on Focused Retrieval was held as part of SIGIR in Amsterdam, the Netherlands.  ...  [2] discuss the new link-the-wiki task running at INEX 2007.  ...  A simplified version of this task is being run for the first time at INEX 2007, with the expectation of a full task in 2008 and beyond.  ... 
doi:10.1145/1328964.1328981 fatcat:3qi4wek47zffha4k7l2y5xkw4m

Investigating the document structure as a source of evidence for multimedia fragment retrieval

Mouna Torjmen-Khemakhem, Karen Pinel-Sauvagnat, Mohand Boughanem
2013 Information Processing & Management  
We conducted several experiments in the context of the Multimedia track of the INEX evaluation campaign.  ...  Evaluation measure: We evaluated our method in the focused strategy using the official measure of INEX (Kamps et al., 2007).  ...  Table of parameter values: the best value learned from INEX 2006 (used for evaluation on the 2007 one) for the Focused task, and the best value learned from INEX 2007 (used for evaluation on the 2006 one) for the Thorough task, for the parameters in Eq. (1  ... 
doi:10.1016/j.ipm.2013.06.001 fatcat:ac4vtvzcorbu7mi33kwuv2wkla

Evaluating relevant in context

Jaap Kamps, Mounia Lalmas, Jovan Pehcevski
2007 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '07  
The resulting measure was used at INEX 2006.  ...  Our main research question is: how to evaluate the Relevant in Context task?  ...  At INEX 2007, systems will be allowed to return arbitrary passages that can be directly evaluated by the MAgP measure, which in turn could enable a system to receive a perfect score when exactly matching  ... 
doi:10.1145/1277741.1277890 dblp:conf/sigir/KampsLP07 fatcat:wfsfda3vxremhkzhrhonm4tzke
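
The snippet refers to the MAgP (mean average generalized precision) measure for evaluating ranked lists of articles in the Relevant in Context setting. A simplified sketch of an average-generalized-precision computation follows, assuming each returned article already carries a score in [0, 1]; the exact official INEX definition may differ in detail.

```python
# Simplified sketch of average generalized precision for one topic: each
# returned article has a per-document score in [0, 1] reflecting how well the
# returned text inside it matches the highlighted (relevant) text.

def average_generalized_precision(doc_scores, num_relevant_docs):
    """doc_scores: per-article scores of the ranked result list, in rank order.
    num_relevant_docs: number of articles containing any highlighted text."""
    total, running_sum = 0.0, 0.0
    for rank, score in enumerate(doc_scores, start=1):
        running_sum += score
        if score > 0:                      # a relevant article retrieved at this rank
            total += running_sum / rank    # generalized precision at this rank
    return total / num_relevant_docs if num_relevant_docs else 0.0

# Example: three returned articles with per-article scores, four relevant overall.
print(average_generalized_precision([0.8, 0.0, 0.5], num_relevant_docs=4))
```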

Overview of the INEX 2009 Ad Hoc Track [chapter]

Shlomo Geva, Jaap Kamps, Miro Lehtonen, Ralf Schenkel, James A. Thom, Andrew Trotman
2010 Lecture Notes in Computer Science  
doi:10.1007/978-3-642-14556-8_4 fatcat:bdnyqr63bzdzxkxmbkma4mjpqm

Topical and Structural Linkage in Wikipedia [chapter]

Kelly Y. Itakura, Charles L. A. Clarke, Shlomo Geva, Andrew Trotman, Wei Chi Huang
2011 Lecture Notes in Computer Science  
INEX 2007 participants generated ranked lists of possible links for each article, which were evaluated against Wikipedia ground truth using standard recall and precision measures.  ...  Performance of the structural threshold algorithm at the INEX 2007 equivalent task [8].  ... 
doi:10.1007/978-3-642-20161-5_45 fatcat:ny27wsutevhkjapzc7xjl3i2ly
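
The evaluation described here compares each system's proposed links for an article against the links present in the Wikipedia ground truth using standard recall and precision. A small set-based illustration follows; the link identifiers are made up for the example.

```python
# Set-based precision/recall of proposed article links against ground truth.

def link_precision_recall(proposed, ground_truth):
    proposed, ground_truth = set(proposed), set(ground_truth)
    correct = len(proposed & ground_truth)
    precision = correct / len(proposed) if proposed else 0.0
    recall = correct / len(ground_truth) if ground_truth else 0.0
    return precision, recall

proposed_links = {"XML", "Information retrieval", "Precision"}
wikipedia_links = {"XML", "Information retrieval", "Recall", "INEX"}
print(link_precision_recall(proposed_links, wikipedia_links))  # (0.67, 0.5)
```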

Report on INEX 2008

Gianluca Demartini, Gabriella Kazai, Marijn Koolen, Monica Landoni, Ragnar Nordlie, Nils Pharo, Ralf Schenkel, Martin Theobald, Andrew Trotman, Arjen P. de Vries, Alan Woodley, Ludovic Denoyer (+8 others)
2009 SIGIR Forum  
INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results  ...  This paper reports on the INEX 2008 evaluation campaign, which consisted of a wide range of tracks: Ad hoc, Book, Efficiency, Entity Ranking, Interactive, QA, Link the Wiki, and XML Mining.  ...  In 2007, INEX has started the XML Entity Ranking track (INEX-XER) to provide a forum where researchers may compare and evaluate techniques for systems that return lists of entities.  ... 
doi:10.1145/1670598.1670603 fatcat:zoxbecrybrf63fg54g4kt7w7na

Using textual and structural context for searching Multimedia Elements

Mouna Torjmen, Karen Pinel-Sauvagnat, Mohand Boughanem
2010 International Journal of Business Intelligence and Data Mining  
Experimental evaluation is carried out using the INEX Multimedia Fragments Task 2006 and 2007.  ...  By comparing our proposed measure (RepOnt) and the Rada measure, we observed that the difference of results between the two measures is very significant in the INEX 2007 test set, whereas it is not the  ...  We obtained for INEX 2006 a MAP equal to 0.4030 (instead of 0.3965) and for INEX 2007 a MAP equal to 0.2815 (instead of 0.2682).  ... 
doi:10.1504/ijbidm.2010.036123 fatcat:e4iyijg4wrcy7f7pphywqjs4oq

Overview of the INEX 2009 Entity Ranking Track [chapter]

Gianluca Demartini, Tereza Iofciu, Arjen P. de Vries
2010 Lecture Notes in Computer Science  
The XML Entity Ranking (XER) track at INEX creates a discussion forum aimed at standardizing evaluation procedures for entity retrieval.  ...  This paper describes the XER tasks and the evaluation procedure used at the XER track in 2009, where a new version of Wikipedia was used as underlying collection; and summarizes the approaches adopted  ...  Since 2007, INEX has organized a yearly XML Entity Ranking track (INEX-XER) to provide a forum where researchers may compare and evaluate techniques for engines that return lists of entities.  ... 
doi:10.1007/978-3-642-14556-8_26 fatcat:34llptermngh7lbjcq3ihn4vdq

Presenting Structured Text Retrieval Results [chapter]

Peter M. D. Gray, Todd Eavis, Alfred Inselberg, Patrick Valduriez, Patrick Valduriez, Goetz Graefe, Goetz Graefe, Hans Zeller, Goetz Graefe, Esther Pacitti, Christoph Koch, Rui Zhang (+81 others)
2009 Encyclopedia of Database Systems  
Cross references: Evaluation metrics for structured text retrieval; INitiative for the Evaluation of XML Retrieval (INEX); XML Retrieval  ...  (INEX 2006-2007) explicitly asks for a single best entry point into the article (so non-overlapping and non-scattered articles by definition).  ... 
doi:10.1007/978-0-387-39940-9_317 fatcat:fnsl7hpfjfcixo7rwvbl6l3vhy
Showing results 1 — 15 out of 758 results