156 Hits in 3.4 sec

The LIG multi-criteria system for video retrieval

Stéphane Ayache, Georges Quénot
2008 Proceedings of the 2008 international conference on Content-based image and video retrieval - CIVR '08  
The LIG search system uses a user-controlled combination of five criteria: keywords, similarity to example images, semantic categories, similarity to already identified positive images, and temporal closeness  ...  A relevance is computed for each shot as the maximum of the relevances associated with each key frame (or subshot). Figure 1: The LIG video retrieval system. Criterion 1: Keyword based search.  ...  Temporal closeness (within the video stream) to already retrieved images can be used for the search.  ... 
doi:10.1145/1386352.1386429 dblp:conf/civr/AyacheQ08 fatcat:pbry74wotrbkjm7mvlbxi4ifeq
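The snippet above states one concrete detail: a shot's relevance is the maximum of the relevances of its key frames (or subshots). A minimal sketch of that pooling step (the function name and input format are illustrative, not from the paper):

```python
def shot_relevance(keyframe_relevances):
    """Pool key-frame (or subshot) relevance scores into a shot-level
    score by taking the maximum, as described in the snippet above."""
    if not keyframe_relevances:
        return 0.0
    return max(keyframe_relevances)
```

Max-pooling means a shot is ranked as high as its single best-matching key frame, which favors recall over precision at the shot level.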

The LIG multi-criteria system for video retrieval

Stéphane Ayache, Georges Quénot, Laurent Besacier
2009 Proceeding of the ACM International Conference on Image and Video Retrieval - CIVR '09  
The LIG search system uses a user-controlled combination of six criteria: keywords, phonetic string, similarity to example images, semantic categories, similarity to already identified positive images,  ...  THE LIG VIDEO RETRIEVAL SYSTEM The LIG search system [1] uses a user-controlled combination of six criteria: keywords, phonetic string (new in 2009), similarity to example images, semantic categories  ...  Figure 1: The LIG video retrieval system. Criterion 1: Keyword based search. The keyword based search is done using a vector space model.  ... 
doi:10.1145/1646396.1646460 dblp:conf/civr/AyacheQB09 fatcat:5jv6ubzehngk7f4ixdy5d576da
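The snippet notes that the keyword criterion uses a vector space model. A minimal sketch of vector-space matching via cosine similarity over term-count vectors (the paper does not specify its weighting scheme, so plain term frequencies are an assumption here; TF-IDF weighting would be a common refinement):

```python
import math
from collections import Counter

def cosine_similarity(query_tokens, doc_tokens):
    """Cosine similarity between two bags of words, the core scoring
    operation of a vector space retrieval model."""
    q, d = Counter(query_tokens), Counter(doc_tokens)
    dot = sum(q[t] * d[t] for t in q)
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    norm_d = math.sqrt(sum(v * v for v in d.values()))
    if norm_q == 0 or norm_d == 0:
        return 0.0
    return dot / (norm_q * norm_d)
```

Shots would then be ranked by the similarity between the query vector and the text associated with each shot (e.g. speech transcripts).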

Audio-Video Analysis of Musical Expressive Intentions [chapter]

Ingrid Visentini, Antonio Rodà, Sergio Canazza, Lauro Snidaro
2011 Lecture Notes in Computer Science  
The results demonstrate that the visual component helps subjects better recognize the different expressive intentions of the musical performances, showing that the fusion of audio-visual information  ...  can significantly improve the degree of recognition given by single means.  ...  Moreover, the user may not know exactly what document he/she is looking for, but might want to browse the audio-visual library to search for a musical performance that meets certain criteria: for example  ... 
doi:10.1007/978-3-642-24088-1_23 fatcat:3vqtbjqjvjbxxjkbbqcgse35fi

IRIM at TRECVID 2012 : Semantic Indexing and Instance Search

Nicolas Ballas, Benjamin Labbé, Aymen Shabou, Hervé Le Borgne, Philippe Gosselin, Miriam Redi, Bernard Mérialdo, Hervé Jégou, Jonathan Delhumeau, Rémi Vieux, Boris Mansencal, Jenny Benois-Pineau (+25 others)
2012 TREC Video Retrieval Evaluation  
For the semantic indexing task, our approach uses a six-stage processing pipeline for computing scores for the likelihood of a video shot to contain a target concept.  ...  Then a two-step fusion method is used to combine these individual results and obtain a score for the likelihood of an instance to appear in a video clip.  ...  Acknowledgments This work has been carried out in the context of the IRIM (Indexation et Recherche d'Information Multimédia) of the GDR-ISIS research network from CNRS.  ... 
dblp:conf/trecvid/BallasLSBGRMJDV12 fatcat:37civg6n4rcanhtdaaoocj5dpm

Media objects for user-centered similarity matching

Jean Martinet, Shin'ichi Satoh, Yves Chiaramella, Philippe Mulhem
2008 Multimedia tools and applications  
The aim of our work is to define media objects for document description suited to images and videos, integrating a user-centered definition of importance for similarity matching.  ...  The importance is defined according to criteria and hypotheses, which have been experimentally validated.  ...  Many initial results have been achieved on content-based image and video retrieval systems considering mainly low-level features to represent the content of visual documents for retrieval purposes.  ... 
doi:10.1007/s11042-008-0200-9 fatcat:6a4vvyyzafgt5bkbrh4a2co3yq

[Invited Paper] TRECVid Semantic Indexing of Video: A 6-Year Retrospective

George Awad, Cees G. M. Snoek, Alan F. Smeaton, Georges Quénot
2016 ITE Transactions on Media Technology and Applications  
As with the previous High-Level Feature detection task which ran from 2002 to 2009, the semantic indexing task aims at evaluating methods and systems for detecting visual, auditory or multi-modal concepts  ...  Semantic indexing, or assigning semantic tags to video samples, is a key component for content-based access to video documents and collections.  ...  the best available for the purpose.  ... 
doi:10.3169/mta.4.187 fatcat:hcibmza4gzf3naiuvxwop7wgjm

Rushes summarization by IRIM consortium

Georges Quénot, Lionel Granjon, Denis Pellerin, Michèle Rombaut, Stephane Ayache, Jenny Benois-Pineau, Boris Mansencal, Eliana Rossi, Matthieu Cord, Frederic Precioso, David Gorisse, Patrick Lambert (+1 others)
2008 Proceeding of the 2nd ACM workshop on Video summarization - TVS '08  
In this paper, we present the first participation of a consortium of French laboratories, IRIM, in the TRECVID 2008 BBC Rushes Summarization task. Our approach resorts to video skimming.  ...  We are actually combining the cut detection method with our content-based search engine [14] previously developed for image retrieval in order to carry out an interactive content-based video analysis  ...  summary size (secs) • TT - total time spent judging the inclusions (secs). For the DU and XD criteria, our results are not good.  ... 
doi:10.1145/1463563.1463577 dblp:conf/mm/QuenotBMRCPGLAGPRA08 fatcat:5ydxxnrxjzf4hfg7wwjk44t7za

VITALAS at TRECVID-2009

Christos Diou, Nikos Dimitriou, Panagiotis Panagiotopoulos, Christos Papachristou, Anastasios Delopoulos, George Stephanopoulos, Henning Rode, Theodora Tsikrika, Arjen P. de Vries, Daniel Schneider, Jochen Schwenninger, Marie-Luce Viaud (+17 others)
2009 TREC Video Retrieval Evaluation  
ACKNOWLEDGMENTS This work was supported by the EU-funded VITALAS project (FP6-045389). Christos Diou is supported by the Greek State Scholarships Foundation (http://www.iky.gr).  ...  This system has been built by the partners of the VITALAS EU-funded research project which has been developing a video and image retrieval system for large collections that integrates different search  ...  The concepts were indexed by the PF/Tijah retrieval system [12] , which is also used for the concept retrieval.  ... 
dblp:conf/trecvid/DiouDPPDSRTVSSV09 fatcat:rykilybqwvhd3hxza67xa7cgd4

TRECVID 2012 - An Overview of the Goals, Tasks, Data, Evaluation Mechanisms and Metrics

Paul Over, Jonathan G. Fiscus, Gregory A. Sanders, Barbara Shaw, George Awad, Martial Michel, Alan F. Smeaton, Wessel Kraaij, Georges Quénot
2012 TREC Video Retrieval Evaluation  
Introduction The TREC Video Retrieval Evaluation (TRECVID) 2012 was a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of  ...  The top 10 performing systems form two clusters (ECNU, AXES, and TokyoTechCanon) vs. the other 7. The clusters highlight the importance of specifying a common threshold selection criterion.  ... 
dblp:conf/trecvid/OverFSSAMSKQ12 fatcat:n7bxryyugzeolnckapizmtmnni

AI in France: History, Lessons Learnt, State of the Art and Future [chapter]

Eunika Mercier-Laurent
2009 Lecture Notes in Computer Science  
It also introduces AFIA, the French Association for AI, and describes some activities such as the main conferences and publications.  ...  This chapter begins by a short history of AI in France since the early 1970s. It gives some examples of industrial applications developed since the 1980s.  ...  Further information is available from fcab@ieee.org. o The Decision team works on decision under uncertainty, multi-criteria decision making and context-based decision aiding. o The AnimatLab is devoted  ... 
doi:10.1007/978-3-642-03226-4_6 fatcat:ebymircaobhzxcgiiyhbou3xvq

Experiments in Lifelog Organisation and Retrieval at NTCIR [chapter]

Cathal Gurrin, Hideo Joho, Frank Hopfgartner, Liting Zhou, Rami Albatal, Graham Healy, Duc-Tien Dang Nguyen
2020 Evaluating Information Retrieval and Access Tasks  
The Lifelog task ran for over 4 years, from NTCIR-12 until NTCIR-14 (2015.02-2019.06); it allowed participants to submit to five subtasks, each tackling a different challenge related to lifelog retrieval  ...  Finally, the lessons learned and challenges within the domain of lifelog retrieval are presented.  ...  Acknowledgements Many thanks to the editors and all authors of this book, and to the present and past organisers and participants of the NTCIR tasks.  ... 
doi:10.1007/978-981-15-5554-1_13 fatcat:fhsvm3teibelblxn2qapgfbhue

Towards a Better Integration of Written Names for Unsupervised Speakers Identification in Videos

Johann Poignant, Hervé Bredin, Laurent Besacier, Georges Quénot, Claude Barras
2013 Conference of the International Speech Communication Association  
Existing methods for unsupervised identification of speakers in TV broadcast usually rely on the output of a speaker diarization module and try to name each cluster using names provided by another source  ...  While "late naming" relies on a speaker diarization module optimized for speaker diarization, "integrated naming" jointly optimizes speaker diarization and name propagation in terms of identification errors  ...  For this task we used LOOV [13] (LIG Overlaid OCR in Video). This system has been previously evaluated on another broadcast news corpus with low-resolution videos.  ... 
dblp:conf/interspeech/Poignant13 fatcat:srdgpzx7xbgyrghyq27kikyusq

Ensemble Learning with LDA Topic Models for Visual Concept Detection [chapter]

Sheng Tang, Yan-Tao Zheng, Gang Cao, Yong-Dong Zhang, Jin-Tao Li
2012 Multimedia - A Multidisciplinary Approach to Complex Issues  
discuss their approaches, and hence can be widely regarded as the actual standard for performance evaluation of concept based video retrieval systems (Snoek & Worring, 2009).  ...  Our preliminary results on the TREC Video Retrieval Evaluation (TRECVid) benchmark can be found in (Tang et al., 2008), and preliminary results on pornography detection for online videos can be found  ... 
doi:10.5772/37716 fatcat:34ko6xuaqbbhldqnwxvpslrak4

Simulation in contexts involving an interactive table and tangible objects

Sebastien Kubicki, Yoann Lebrun, Sophie Lepreux, Emmanuel Adam, Christophe Kolski, René Mandiau
2013 Simulation modelling practice and theory  
The Multi-Agent System proposed in this paper is modelled according to an architecture adapted to the exploitation of tangible and virtual objects during simulation on an interactive table.  ...  The TangiSense interactive table is presented; it is connected to a Multi-Agent System making it possible to give the table a certain level of adaptation: each tangible object can be associated with an agent  ...  The authors would like to thank the partners with whom we collaborated on the TTT and IMAGIT projects: LIG, RFIdées, the CEA and Supertec.  ... 
doi:10.1016/j.simpat.2012.10.012 fatcat:oe3oixrz6nfr7fzzludrg75a3i

Sparse Ensemble Learning for Concept Detection

Sheng Tang, Yan-Tao Zheng, Yu Wang, Tat-Seng Chua
2012 IEEE transactions on multimedia  
This work presents a novel sparse ensemble learning scheme for concept detection in videos.  ...  individual classifiers in each locality for final classification.  ...  as the actual standard for performance evaluation of concept-based video retrieval systems [37] .  ... 
doi:10.1109/tmm.2011.2168198 fatcat:yg5dvk75qvgilgsxvdrslblguu
Showing results 1 — 15 out of 156 results