IIT at TREC-8: Improving Baseline Precision

M. Catherine McCabe, David O. Holmes, Kenneth L. Alford, Abdur Chowdhury, David A. Grossman, Ophir Frieder
1999 Text Retrieval Conference  
This year, we focused on improving our baseline and then introduced some experimental improvements.  ...  In TREC-8, we participated in the automatic and manual tracks for category A as well as the small web track.  ...  Acknowledgments We wish to thank the director and staff at the Major Shared Resource Center, U.S.  ... 
dblp:conf/trec/McCabeHACGF99 fatcat:phdg62iukjhp7d64wwlwbcskuu

The TREC-2001 Cross-Language Information Retrieval Track: Searching Arabic Using English, French or Arabic Queries

Fredric C. Gey, Douglas W. Oard
2001 Text Retrieval Conference  
On average, forty percent of the relevant documents discovered by a participating team were found by no other team, a higher rate than normally observed at TREC.  ...  Acknowledgments We are grateful to Ellen Voorhees for coordinating this track at NIST and for her extensive assistance with our preliminary analysis, to the participating research teams for their advice  ...  Stems generated from the same root typically have related meanings, so indexing roots might improve recall (possibly at the expense of precision, though).  ... 
dblp:conf/trec/GeyO01 fatcat:cwkg2mal3zgrpewhdtyo333tea
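The root-indexing idea quoted above (conflating surface forms to a shared root raises recall, possibly at the cost of precision) can be sketched as follows. This is a minimal illustration, not the track's actual system; the root table and transliterated word forms are invented for the example.

```python
# Hypothetical root table: several surface forms map to one root.
ROOTS = {
    "kitab": "ktb",   # book
    "maktab": "ktb",  # office
    "kataba": "ktb",  # he wrote
    "qalam": "qlm",   # pen
}

def index_by_root(docs):
    """Map each root to the set of doc ids containing any of its surface forms."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            root = ROOTS.get(w, w)  # fall back to the word itself
            index.setdefault(root, set()).add(doc_id)
    return index

docs = {1: ["kitab"], 2: ["kataba"], 3: ["qalam"]}
idx = index_by_root(docs)
# A query for "kitab" under root indexing also retrieves doc 2
# ("kataba"), since both forms share the root "ktb".
hits = idx[ROOTS["kitab"]]
```

The extra match is exactly the recall gain the snippet describes; if "office" documents are irrelevant to a "book" query, the same conflation is where precision can suffer.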

MSIR@FIRE: A Comprehensive Report from 2013 to 2016

Somnath Banerjee, Monojit Choudhury, Kunal Chakma, Sudip Kumar Naskar, Amitava Das, Sivaji Bandyopadhyay, Paolo Rosso
2020 SN Computer Science  
This document is a comprehensive report on the 4 years of MSIR track evaluated at FIRE between 2013 and 2016.  ...  MSIR track was first introduced in 2013 at FIRE and the aim of MSIR was to systematically formalize several research problems that one must solve to tackle the code mixing in Web search for users of many  ...  The success of TREC, CLEF, and NTCIR has clearly established the importance of an evaluation workshop that facilitates research by providing the data and a common forum for comparing models and  ... 
doi:10.1007/s42979-019-0058-0 fatcat:z5ojljqkkfatph46hzjj6epnny

Collection statistics for fast duplicate document detection

Abdur Chowdhury, Ophir Frieder, David Grossman, Mary Catherine McCabe
2002 ACM Transactions on Information Systems  
We compared our solution to the state of the art and found that in addition to improved accuracy of detection, our approach executed in roughly one-fifth the time.  ...  Seventeen inconsistencies were detected in TREC 8 and 65 inconsistencies were noted for the web track of TREC 8.  ...  Similar examples were found for TREC 7 and 8 and the web track of TREC 8.  ... 
doi:10.1145/506309.506311 fatcat:eilrac57nfgwnagybd3n2jyb2a
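The collection-statistics approach described above can be sketched as an idf-filtered fingerprint: keep only terms whose idf falls in a mid-range band, then hash them into a single per-document signature, so near-duplicates differing only in very common (or very rare) terms collide. The thresholds and helper names below are illustrative assumptions, not the paper's tuned values.

```python
import hashlib
import math

def idf(term, doc_freq, n_docs):
    """Inverse document frequency of a term in the collection."""
    return math.log(n_docs / doc_freq[term])

def fingerprint(doc_terms, doc_freq, n_docs, lo=0.5, hi=1.0):
    """Single SHA-1 signature built from terms with idf in [lo, hi].
    The band [lo, hi] is a made-up example, not a recommended setting."""
    kept = sorted(t for t in set(doc_terms)
                  if lo <= idf(t, doc_freq, n_docs) <= hi)
    return hashlib.sha1(" ".join(kept).encode()).hexdigest()

# Toy 4-document collection: term -> number of docs containing it.
doc_freq = {"the": 3, "quick": 2, "fox": 2, "jumps": 2,
            "a": 1, "slow": 1, "turtle": 1, "lazy": 1, "dog": 1}
fp1 = fingerprint(["the", "quick", "fox", "jumps"], doc_freq, 4)
fp2 = fingerprint(["a", "quick", "fox", "jumps"], doc_freq, 4)
fp3 = fingerprint(["the", "slow", "turtle"], doc_freq, 4)
# fp1 == fp2: the near-duplicates differ only in filtered-out terms.
```

Because each document reduces to one hash, duplicate detection is a single pass plus a hash-table lookup, which is consistent with the large speedup the abstract reports.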

Overview of TREC 2006

Ellen M. Voorhees
2006 Text Retrieval Conference  
A similar investigation of the TREC-8 ad hoc collection showed that every automatic run that had a mean average precision score of at least 0.1 had a percentage difference of less than 1 % between the  ...  At varying cut-off levels, recall and precision tend to be inversely related since retrieving more documents will usually increase recall while degrading precision and vice versa.  ... 
dblp:conf/trec/Voorhees06 fatcat:5olfha4lxvcqbahxz7rl6bldha
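The inverse relation between recall and precision at varying cut-off levels, noted in the snippet above, can be shown with a few lines of code. The ranking and relevance judgments here are invented for illustration.

```python
def precision_recall_at_k(ranked, relevant, k):
    """Precision and recall computed over the top-k retrieved documents."""
    hits = sum(1 for d in ranked[:k] if d in relevant)
    return hits / k, hits / len(relevant)

# Hypothetical ranked list with 3 relevant documents.
ranked = ["d1", "d7", "d3", "d9", "d2", "d5"]
relevant = {"d1", "d3", "d5"}

# Deepening the cut-off raises recall but drags precision down:
p1, r1 = precision_recall_at_k(ranked, relevant, 1)  # 1.0, ~0.33
p6, r6 = precision_recall_at_k(ranked, relevant, 6)  # 0.5, 1.0
```

At k=1 every retrieved document is relevant but most relevant documents are missed; at k=6 all relevant documents are found but half the retrieved set is noise, which is the trade-off the overview describes.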

Query Refinement in Similarity Retrieval Systems

Kaushik Chakrabarti, Michael Ortega, Kriengkrai Porkaew, Sharad Mehrotra
2001 IEEE Data Engineering Bulletin  
Experiments with the model for spreading activation In this subsection, we present our experiments done on the CACM and TREC-8 web test-collections.  ...  It reaches good precision and recall after a few iterations. For instance, with all queries, the model yielded at least 80% precision at 50% recall with 10 iterations.  ...  He is the current Vice Chair of the IEEE TCDE and the Chair of the ICDE Steering Committee and also a member of the JCDL and ECDL Steering Committees. He is a Senior Member of IEEE.  ... 
dblp:journals/debu/ChakrabartiOPM01 fatcat:kwrajv6a2rbr5kng7lmk66a5wu