
UNED@ImageCLEF 2004: Using Image Captions Structure and Noun Phrase Based Query Expansion for Cross-Language Image Caption Retrieval

Víctor Peinado, Javier Artiles, Fernando López-Ostenero, Julio Gonzalo, Felisa Verdejo
2004 Conference and Labs of the Evaluation Forum  
Two different strategies are attempted: a) Expanding and translating queries with noun phrases, and b) Performing structured searches using entities located in topic titles over image caption fields.  ...  Our best result (using only structured searches over image captions) reaches 88.18% of the monolingual experiment's performance and performs 8.3% better than Pirkola's structured queries.  ...  Acknowledgements This work has been partially supported by a grant from the Spanish Government, project R2D2 (TIC2003-07158-C0401), and a grant from the UNED (Universidad Nacional de Educación a Distancia  ... 
dblp:conf/clef/PeinadoALGV04 fatcat:vhapfnorgvfnfbeukhkllpzvae
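
The UNED entry above compares its structured caption-field searches against Pirkola's structured queries, in which every translation of a source-language term is grouped under a synonym operator so that the group is weighted as if it were a single term. A minimal sketch of that grouping step, assuming an InQuery/Indri-style #syn/#combine syntax and an invented toy dictionary (not the resources used in the paper):

```python
# Sketch of Pirkola-style structured query construction: every translation of a
# source-language term is wrapped in a synonym operator so the retrieval engine
# treats the group as one term. The dictionary below is a made-up toy example.
TOY_DICTIONARY = {
    "iglesia": ["church", "chapel"],
    "torre": ["tower", "steeple"],
}

def pirkola_structured_query(source_terms, dictionary):
    """Build an InQuery/Indri-style query string with #syn groups."""
    clauses = []
    for term in source_terms:
        translations = dictionary.get(term, [term])  # fall back to the source term
        if len(translations) == 1:
            clauses.append(translations[0])
        else:
            clauses.append("#syn(" + " ".join(translations) + ")")
    return "#combine(" + " ".join(clauses) + ")"

print(pirkola_structured_query(["iglesia", "torre"], TOY_DICTIONARY))
# -> #combine(#syn(church chapel) #syn(tower steeple))
```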

Assessing Translation Quality for Cross Language Image Retrieval [chapter]

Paul Clough, Mark Sanderson
2004 Lecture Notes in Computer Science  
Like other cross language tasks, we show that the quality of the translation resource, among other factors, has an effect on retrieval performance.  ...  Using data from the ImageCLEF test collection, we investigate the relationship between translation quality and retrieval performance when using Systran, a machine translation (MT) system, as a translation  ...  Acknowledgments We would like to thank members of the NLP group and Department of Information Studies for their time and effort in producing manual assessments.  ... 
doi:10.1007/978-3-540-30222-3_57 fatcat:ogfhg2u22bavfadaxghqttlc7q
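
The entry above relates translation quality to retrieval performance. A hedged sketch of one way such a per-query analysis can be run: rank-correlate translation-quality judgements with per-query average precision. All numbers below are invented placeholders, not figures from the paper:

```python
# Correlate per-query translation quality with per-query retrieval effectiveness.
def rank(values):
    """Assign 1-based ranks to a list of numbers (ties broken arbitrarily)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    """Spearman rank correlation via Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

translation_quality = [0.9, 0.4, 0.7, 0.2, 0.8]       # e.g. manual adequacy scores
average_precision   = [0.55, 0.20, 0.35, 0.10, 0.60]  # per-query AP (placeholders)
print(f"Spearman rho = {spearman(translation_quality, average_precision):.3f}")
```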

Caption and Query Translation for Cross-Language Image Retrieval [chapter]

Paul Clough
2005 Lecture Notes in Computer Science  
Image retrieval is achieved through matching textual queries to associated image captions for the following languages: French, German, Spanish and Italian using commercially and publicly available resources  ...  In this paper, we evaluate query versus document translation for the ImageCLEF 2004 bilingual ad hoc retrieval task.  ...  Acknowledgements We would like to thank Jianqiang Wang and Doug Oard from Maryland University for translating the ImageCLEF captions into Spanish, Italian, French and German.  ... 
doi:10.1007/11519645_60 fatcat:bbrbspnhujfplc6u2jhbvrntjm
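
The entry above weighs query translation against document (caption) translation. A minimal sketch of the two pipelines over an invented in-memory caption collection; translate() is a toy lexicon lookup standing in for the commercial and publicly available resources used in the ImageCLEF work:

```python
# Two ways to bridge the language gap in cross-language caption retrieval:
#   a) translate the query into the caption language, search the original captions;
#   b) translate the captions into the query language, search the translated captions.
# translate() is a placeholder for a real MT system or dictionary.

CAPTIONS = {                      # invented English captions
    1: "old church tower by the river",
    2: "fishing boats in the harbour at sunset",
}

TOY_MT = {"iglesia": "church", "torre": "tower", "barcos": "boats"}

def translate(text, lexicon=TOY_MT):
    return " ".join(lexicon.get(w, w) for w in text.lower().split())

def search(query, documents):
    """Rank documents by simple term overlap with the query."""
    q_terms = set(query.lower().split())
    scored = [(len(q_terms & set(doc.split())), doc_id)
              for doc_id, doc in documents.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

spanish_query = "torre de la iglesia"

# a) query translation: translate the query, search the original captions
print(search(translate(spanish_query), CAPTIONS))

# b) document translation: pre-translate every caption into the query language
# (here crudely, by reversing the toy lexicon) and search with the original query
captions_es = {doc_id: translate(doc, {v: k for k, v in TOY_MT.items()})
               for doc_id, doc in CAPTIONS.items()}
print(search(spanish_query, captions_es))
```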

User experiments with the Eurovision cross-language image retrieval system

Paul Clough, Mark Sanderson
2006 Journal of the American Society for Information Science and Technology  
In this paper we present Eurovision, a text-based system for cross-language (CL) image retrieval.  ...  Based on the two search tasks and user feedback, we describe important aspects of any CL image retrieval system.  ...  Thanks also to Hideo Joho for help and support with the GLASS system and concept hierarchies.  ... 
doi:10.1002/asi.20331 fatcat:pg5psaubxfeddjm3mtfg3t7v7e

Sheffield at ImageCLEF 2003

Paul D. Clough, Mark Sanderson
2003 Conference and Labs of the Evaluation Forum  
In this paper, we use the Systran machine translation system for translating queries for cross language image retrieval in a pilot experiment at CLEF 2003, called ImageCLEF.  ...  We discuss the kinds of translation errors encountered during this analysis and show the impact on retrieval effectiveness for individual queries in the ImageCLEF task.  ...  Thanks also to Hideo Joho for help and support with the GLASS system, and in particular his modified BM25 ranking algorithm, and thanks to NTU for providing Chinese versions of the ImageCLEF titles.  ... 
dblp:conf/clef/CloughS03c fatcat:rntqfy5g35fg5fqchpuzqwybwi
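
The acknowledgements above mention a modified BM25 ranking algorithm in the GLASS system. As a reference point only, here is a sketch of plain BM25 scoring (with a smoothed idf) over invented toy captions, not the modified variant the entry refers to:

```python
import math
from collections import Counter

# Plain BM25 over a toy caption collection; k1 and b are the usual defaults.
DOCS = {                          # invented toy captions
    1: "fishing boats in the harbour".split(),
    2: "old stone church and tower".split(),
    3: "boats moored near the old harbour wall".split(),
}

K1, B = 1.2, 0.75
N = len(DOCS)
avgdl = sum(len(d) for d in DOCS.values()) / N
df = Counter(term for doc in DOCS.values() for term in set(doc))

def bm25(query_terms, doc):
    tf = Counter(doc)
    score = 0.0
    for t in query_terms:
        if t not in tf:
            continue
        idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))  # smoothed, non-negative idf
        score += idf * tf[t] * (K1 + 1) / (tf[t] + K1 * (1 - B + B * len(doc) / avgdl))
    return score

query = "old harbour boats".split()
print(sorted(DOCS, key=lambda d: bm25(query, DOCS[d]), reverse=True))  # -> [3, 1, 2]
```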

Comparative Evaluation of Cross-language Information Retrieval Systems [chapter]

Carol Peters
2005 Lecture Notes in Computer Science  
In recognition of this, five years ago, the DELOS Network for Digital Libraries launched the Cross-Language Evaluation Forum (CLEF), with the objective of promoting multilingual information access by providing  ...  of interest wherever and however it is stored, regardless of form or language.  ...  For example, the way in which a query is formulated, the methods used for retrieval (e.g. based on low-level features derived from an image, or based on associated textual information such as a caption  ... 
doi:10.1007/978-3-540-31842-2_16 fatcat:4ad3pwr6pnhr5n6v3gwzsqd4cu

Learning from Multimodal Web Data

John Miles Hessel
2020
...  captioning dataset (because post-hoc annotators only mention "London" in captions if the image is iconically so), but not in a Flickr image tagging dataset (because users may tag any image that happens  ...  While these results show that multimodal web data can be leveraged for building more powerful machine learning-based tools, the communicative intent of multimodal posts, which extend significantly beyond  ...  Cross-modal Search+Retrieval: Information needs for users are evolving: an ideal search engine should be able to support both multimodal queries and responses (Jeon et al., 2003; Rasiwasia et al., 2010  ... 
doi:10.7298/fzce-qv86 fatcat:limoc6b6xjgm5b2dbzh3f72tuq
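
The dissertation entry above touches on cross-modal search and retrieval, where images and text are compared in a shared embedding space. A hedged sketch of just the retrieval step, using random vectors as stand-ins for the output of trained image and text encoders:

```python
import math
import random

# Cross-modal retrieval sketch: rank text descriptions against an image query
# by cosine similarity in a shared embedding space. The vectors are random
# stand-ins; a real system would obtain them from trained encoders.
random.seed(0)
DIM = 8

def fake_embedding(dim=DIM):
    return [random.gauss(0, 1) for _ in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

texts = {
    "boats in the harbour": fake_embedding(),
    "an old church tower": fake_embedding(),
    "a crowded street market": fake_embedding(),
}
image_query = fake_embedding()    # would come from an image encoder

for caption, emb in sorted(texts.items(),
                           key=lambda kv: cosine(image_query, kv[1]), reverse=True):
    print(f"{cosine(image_query, emb):+.3f}  {caption}")
```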

Dagstuhl Reports, Volume 7, Issue 7, July 2017, Complete Issue [article]

2018
With regard to the input and output modalities and languages, support should be provided for modality-preserving and cross-modal and/or cross-lingual use-cases.  ...  We have demonstrated two tasks: 1) Given a query image (without any accompanying text), we retrieve textual descriptions that correspond to the visual attributes in the visual query; and 2) given a textual  ...  QasemiZadeh, Behrang, and Anne-Kathrin Schumann. "The ACL RD-TEC 2.0: A Language Resource for Evaluating Term Extraction and Entity Recognition Methods." In LREC. 2016.  ... 
doi:10.4230/dagrep.7.7 fatcat:ve4n3wvvk5bkfgae36nsto7sre