2,033 Hits in 3.3 sec

MIRACLE at VideoCLEF 2008: Classification of Multilingual Speech Transcripts

Julio Villena-Román, Sara Lana-Serrano
2008 Conference and Labs of the Evaluation Forum  
We took part in both the main mandatory Classification task, which consists of classifying videos of television episodes using speech transcripts and metadata, and the Keyframe Extraction task, whose objective  ...  Regarding the classification task, we ranked 3rd (out of 6 participants) in terms of precision and 2nd in terms of recall.  ...  -67407-C03-03 and by Madrid R+D Regional Plan, by means of the project MAVIR (Enhancing the Access and the Visibility of Networked Multilingual Information for the Community of Madrid), S-0505/TIC/000267  ... 
dblp:conf/clef/Villena-RomanL08a fatcat:q7igqx3i4nghlpbfi2b76eicye

SINAI at VideoCLEF 2008

José M. Perea-Ortega, Arturo Montejo-Ráez, María Teresa Martín-Valdivia, Manuel Carlos Díaz-Galiano, Luis Alfonso Ureña López
2008 Conference and Labs of the Evaluation Forum  
This paper describes the first participation of the SINAI research group in the VideoCLEF 2008 track. We only submitted runs for the classification task in Dutch and English.  ...  The experiments show that an IR system can perform well as a classifier of multilingual videos, using their speech transcriptions and obtaining good results.  ...  VideoCLEF is a new track of the Cross Language Evaluation Forum (CLEF) 2008 that aims to develop and evaluate tasks in processing video content in a multilingual environment.  ... 
dblp:conf/clef/Perea-OrtegaMMDU08 fatcat:axlkgbtxbfgppbgnkt3kqw7i7i

SINAI at VideoCLEF 2009

José M. Perea-Ortega, Arturo Montejo-Ráez, María Teresa Martín-Valdivia, Luis Alfonso Ureña López
2009 Conference and Labs of the Evaluation Forum  
The results obtained show the expected increase in precision due to the use of metadata in the classification of the test videos.  ...  We used Support Vector Machines (SVM) as the classification algorithm, and two experiments were submitted: one using the metadata files during the generation of the training corpus and one without them  ...  of Andalucía (Spain) under excellence project GeOasis (P08-41999), under project on Tourism (FFIEXP06-TU2301-2007/000024), the Spanish Government under project Text-Mess TIMOM (TIN2006-15265-C06-03) and  ... 
dblp:conf/clef/Perea-OrtegaMMU09 fatcat:4gldixnzmzgf5a6r6hzbjjstxq

CIMWOS: A MULTIMEDIA, MULTIMODAL AND MULTILINGUAL INDEXING AND RETRIEVAL SYSTEM

H. PAPAGEORGIOU, A. PROTOPAPAS
2003 Digital Media Processing for Multimedia Interactive Services  
An ergonomic and user-friendly web-based interface allows the user to efficiently retrieve video segments by a combination of media description, content metadata and natural language text.  ...  Image processing includes video segmentation and key frame extraction, face detection and face identification, object and scene recognition, video text detection and character recognition.  ...  Story Detection and Topic Classification (SD/TC) Story detection (SD) and topic classification (TC) use a set of models trained on an annotated corpus of stories and their associated topics.  ... 
doi:10.1142/9789812704337_0102 fatcat:vjhi65k6x5d43drbjokwqze7r4

Television Heritage and the Semantic Web: Video Active and EUscreen

Johan Oomen, Anna Christaki, Vassilis Tzouvaras
2009 International Conference on Dublin Core and Metadata Applications  
The Video Active project has used the latest advances in Semantic Web technologies in order to provide expressive representation of the metadata, mapping heterogeneous metadata schema in a common metadata  ...  In this three-year project, more fine-grained access to video objects will be provided, using the EBU Core Set of Metadata, released by the EBU metadata working group at the end of 2008.  ...  Thirdly, Video Active is using multilingual controlled vocabularies for the metadata elements Keywords, Genre and Location.  ... 
dblp:conf/dc/OomenCT09 fatcat:glkfu2chqre4rf7uxmrmcgoy7a

The University of Amsterdam at VideoCLEF 2008

Jiyin He, Xu Zhang, Wouter Weerkamp, Martha A. Larson
2008 Conference and Labs of the Evaluation Forum  
The results of the experimentation showed that archival metadata improves performance of classification, but the addition of speech recognition transcripts in one or both languages does not yield performance  ...  UAms chose to focus on exploiting archival metadata and speech transcripts generated by both the Dutch and English speech recognizers.  ...  We aim to continue refinement of our techniques for classification and retrieval of conversational speech in a multilingual setting.  ... 
dblp:conf/clef/HeZWL08a fatcat:5itgcwi4svfpzmntrjug473cby

Optimization of information retrieval for cross media contents in a best practice network

Pierfrancesco Bellini, Daniele Cenni, Paolo Nesi
2014 International Journal of Multimedia Information Retrieval  
In those cases, the cross media content includes classical files and their metadata plus web pages, events, blogs, discussion forums, and multilingual comments.  ...  Recent challenges in information retrieval are related to cross media information in social networks including rich media and web based content.  ...  Acknowledgments The authors want to thank all the partners involved in ECLAP, and the European Commission for funding the project.  ... 
doi:10.1007/s13735-014-0058-8 fatcat:zu3rlocxpfh5tmx4oqeqgurcwe

ECHO: a digital library for historical film archives

Pasquale Savino, Carol Peters
2004 International Journal on Digital Libraries  
However, limitations in information and communication technologies have, so far, prevented the average person from taking much advantage of existing resources.  ...  Wide access to large information collections is of great importance in many aspects of everyday life.  ...  Multilingual User Interface The ECHO film archives are made up of language dependent (speech, text) and independent (video) media.  ... 
doi:10.1007/s00799-003-0062-8 fatcat:spw3igdfgbepxjgpw26pmw7pqy

Overview of VideoCLEF 2008: Automatic Generation of Topic-Based Feeds for Dual Language Audio-Visual Content [chapter]

Martha Larson, Eamonn Newman, Gareth J. F. Jones
2009 Lecture Notes in Computer Science  
The VideoCLEF track, introduced in 2008, aims to develop and evaluate tasks related to analysis of and access to multilingual multimedia content.  ...  In its first year, VideoCLEF piloted the Vid2RSS task, whose main subtask was the classification of dual language video (Dutch-language television content featuring English-speaking experts and studio guests  ...  Acknowledgements This research was supported in part by the E.U. IST programme of the 6th FP for RTD under project MultiMATCH contract IST-033104.  ... 
doi:10.1007/978-3-642-04447-2_119 fatcat:rr2cv7ngmjfxxdncygjv2xfwkq

The architecture of TrueViz: a groundTRUth/metadata editing and VIsualiZing ToolKit

Chang Ha Lee, Tapas Kanungo
2003 Pattern Recognition  
TrueViz reads and stores groundtruth/metadata in XML format, and reads a corresponding image stored in TIFF image file format.  ...  The multilingual data entry, visualization, and search features of TrueViz are quite unique and are discussed in Section 6.  ... 
doi:10.1016/s0031-3203(02)00101-2 fatcat:ncllwtjea5e5jo7drgihlssyuy

Toward human-computer information retrieval

Gary Marchionini
2007 Bulletin of the American Society for Information Science and Technology  
Digital video for education and research: 2500+ video segments (MPEG-1, MPEG-2, MPEG-4, QuickTime); multiple visual surrogates; Agile Views design framework with different types of views, including overviews  ...  Open Video example (www.open-video.org): an open-access digital library of  ... 
doi:10.1002/bult.2006.1720320508 fatcat:veutjmbv4vgufimuvmyc4kmfuy

Coar Resource Type Vocabulary In The Classification Server Of Phaidra

Sandor Kopacsi
2017 Zenodo  
This presentation was given at the COAR 2017 Annual Meeting in Venice (Italy), 8-10 April 2017.  ...  (e.g. Getty, AGROVOC, Eurovoc, ÖFOS, COAR RTV). • our relevant vocabularies and classifications can be stored, organized and served in a classification server. Goals of the Classification Server • metadata  ...  and classifications • classifications in many topics are already available (e.g.  ... 
doi:10.5281/zenodo.579874 fatcat:2u4nx6lx3jbwffahiswbqqeg5a

Greenstone digital library software

David Bainbridge, Ian H. Witten
2004 Proceedings of the 2004 joint ACM/IEEE conference on Digital libraries - JCDL '04  
Greenstone is international and multilingual: it is widely used in many different countries; interfaces and collections exist in many of the world's languages; and it is being distributed by UNESCO as  ...  Additionally, far larger volumes of information may be associated with a collection-typically audio, image, and video.  ...  Date metadata can be presented in a list that allows selection by year and month.  ... 
doi:10.1145/996350.996483 dblp:conf/jcdl/BainbridgeW04 fatcat:jft6xj7kbfex7ol2kv2eyk2is4

Automatic Truecasing of Video Subtitles Using BERT: A Multilingual Adaptable Approach [chapter]

Ricardo Rei, Nuno Miguel Guerreiro, Fernando Batista
2020 Communications in Computer and Information Science  
We have also created a versatile multilingual model, and the conducted experiments show that good results can be achieved both for monolingual and multilingual data.  ...  Finally, we applied domain adaptation by finetuning models, initially trained on general written data, on video subtitles, revealing gains over other approaches not only in performance but also in terms  ...  Section 5 presents the results achieved, both on a generic domain (monolingual and multilingual) and in the specific domain of video subtitles.  ... 
doi:10.1007/978-3-030-50146-4_52 fatcat:cw4hwgj32bghhcjd3ssqyiooye

Video2Text: Learning to Annotate Video Content

Hrishikesh Aradhye, George Toderici, Jay Yagnik
2009 2009 IEEE International Conference on Data Mining Workshops  
It analyzes audiovisual features of 25 million YouTube.com videos (nearly 150 years of video data), effectively searching for consistent correlation between these features and text metadata.  ...  While training, our method does not assume any explicit manual annotation other than the weak labels already available in the form of video title, description, and tags.  ...  and noisy, multilingual text metadata.  ... 
doi:10.1109/icdmw.2009.79 dblp:conf/icdm/AradhyeTY09 fatcat:ydpvgevy2bannfikptiejgdelu