A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is application/pdf.
Recommendations for video event recognition using concept vocabularies
2013
Proceedings of the 3rd ACM International Conference on Multimedia Retrieval - ICMR '13
We consider the recommendations for video event recognition using concept vocabularies the most important contribution of the paper, as they provide guidelines for future work. ...
We conclude that for concept banks it pays to be informative.
Amirhossein Habibian, Koen E. A. van de Sande, and Cees G. M. ...
doi:10.1145/2461466.2461482
dblp:conf/mir/HabibianSS13
fatcat:fkmjirpp3nbfjcbbkwp6we4eqi
Recommendations for recognizing video events by concept vocabularies
2014
Computer Vision and Image Understanding
Representing videos using vocabularies composed of concept detectors appears promising for generic event recognition. ...
We consider the recommendations for recognizing video events by concept vocabularies the most important contribution of the paper, as they provide guidelines for future work. ...
The authors thank Dennis Koelma and Koen E.A. van de Sande for providing concept detectors. ...
doi:10.1016/j.cviu.2014.02.003
fatcat:njt64s6lcrcdvcpoognocfqdru
Semantic Reasoning in Zero Example Video Event Retrieval
2017
ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)
In our paper, we present our Semantic Event Retrieval System which 1) shows the importance of high-level concepts in a vocabulary for the retrieval of complex and generic high-level events and 2) uses ...
to pre-train (Vocabulary challenge); 2) which pre-trained concept detectors are relevant for a certain unseen high-level event (Concept Selection challenge). ...
ACKNOWLEDGMENTS We would like to thank the technology program Making Sense of Big Data (MSoBD) for their financial support. ...
doi:10.1145/3131288
fatcat:x3yghkfjarerdlqoek3meyrrmm
Composite Concept Discovery for Zero-Shot Video Event Detection
2014
Proceedings of International Conference on Multimedia Retrieval - ICMR '14
We consider automated detection of events in video without the use of any visual training examples. ...
We demonstrate that by combining concepts into composite concepts, we can train more accurate classifiers for the concept vocabulary, which leads to improved zero-shot event detection. ...
concepts recommended in [5]. ...
doi:10.1145/2578726.2578746
dblp:conf/mir/HabibianMS14
fatcat:e57beeb4rzgpfjdvmg4zdwbvxm
Bridging the Ultimate Semantic Gap
2015
Proceedings of the 5th ACM on International Conference on Multimedia Retrieval - ICMR '15
This paper presents a state-of-the-art system for event search without any textual metadata or example videos. ...
The system relies on substantial video content understanding and allows for semantic search over a large collection of videos. ...
It used the Blacklight system at the Pittsburgh Supercomputing Center (PSC). ...
doi:10.1145/2671188.2749399
dblp:conf/mir/JiangYMMH15
fatcat:xtpausagpbbwjod4hajip3wwae
Large-Scale Concept Ontology for Multimedia
2006
IEEE Multimedia
We describe a recent collaborative undertaking to develop the Large-Scale Concept Ontology for Multimedia. ...
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the US government. ...
doi:10.1109/mmul.2006.63
fatcat:uf5jsedg2fe3da7rwoyfwjb4fu
Audio-visual grouplet
2011
Proceedings of the 19th ACM international conference on Multimedia - MM '11
The AVGs carry unique audio-visual cues to represent the video content, based on which an audiovisual dictionary can be constructed for concept classification. ...
By using the entire AVGs as building elements, the audio-visual dictionary is much more robust than traditional vocabularies that use discrete audio or visual codewords. ...
All of these types of AVGs are useful for video concept classification. ...
doi:10.1145/2072298.2072316
dblp:conf/mm/JiangL11
fatcat:x2v3gpujtnafhgxzofrdilqh74
ViTS: Video Tagging System from Massive Web Multimedia Collections
2017
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
ViTS is an industrial product under exploitation with a vocabulary of over 2.5M concepts, capable of indexing more than 150k videos per month. ...
(10.04 tags/video), with an accuracy of 80.87%. ...
Sports-1M [12] (1M videos and ∼500 labels) for sport recognition, ActivityNet [8] (20k videos and ∼200 labels) for human activities, EventNet [37] (95k videos and 500 labels) for event-specific concepts ...
doi:10.1109/iccvw.2017.48
dblp:conf/iccvw/FernandezVEMFWR17
fatcat:j4crsduvbjgrxe6rjztws7vsme
Formal representation of events in a surveillance domain ontology
2016
2016 IEEE International Conference on Image Processing (ICIP)
In this paper, we present an extensive ontology framework for representing complex semantic events. ...
The explicit definition of event vocabulary presented in the paper is aimed at aiding forensic analysts to objectively identify and represent complex events. ...
In an effort to develop an open and expandable video analysis framework equipped with tools for analysing, recognising, extracting and classifying events in video, which can be used for searching during ...
doi:10.1109/icip.2016.7532490
dblp:conf/icip/SobhaniCZI16
fatcat:rqlmcdv4f5gjlc3qbd7tkrgb6m
Is Listening Comprehension a Comprehensible Input for L2 Vocabulary Acquisition?
2019
International Journal of English Linguistics
Therefore, it could be retrieved more easily than vocabulary from reading comprehension input. Recommendations and suggestions for future research have been given at the end of the article. ...
We search for the terms “vocabulary learning”, “vocabulary acquisition”, and “listening comprehension” in several international databases to elicit target studies. ...
It can be used to represent our knowledge about all concepts: those underlying objects, situations, events, sequences of events, actions and sequences of actions" (p. 34). ...
doi:10.5539/ijel.v9n6p77
fatcat:mxqd6byj7jahvddwc4aujetrve
Recent Advances in Zero-shot Recognition
2017
arXiv pre-print
However, scaling recognition to a large number of classes with few or no training samples for each class remains an unsolved problem. ...
We also overview related recognition tasks, including one-shot and open-set recognition, which can be used as natural extensions of zero-shot recognition when a limited number of class samples becomes available ...
Yanwei Fu is supported by The Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning. ...
arXiv:1710.04837v1
fatcat:u3mp6dgj2rgqrarjm4dcywegmy
LIVE: An Integrated Production and Feedback System for Intelligent and Interactive TV Broadcasting
2011
IEEE transactions on broadcasting
In this paper, we report recent research activities under the integrated project Live Staging of Media Events (LIVE), which is funded under the European Framework-6 programme, and illustrate how a new LIVE TV broadcasting and content production concept can be introduced to improve the existing TV broadcasting services. ...
The former focuses on extraction of several events and semantic concepts for content indexing and automatic annotation; while the latter uses these indexed semantics for video retrieval and delivery. ...
doi:10.1109/tbc.2011.2158252
fatcat:e2vxeyvpqbdolphla3pwksakfy
Toward speech as a knowledge resource
2001
IBM Systems Journal
Nevertheless, the potential reward for solving this problem drives us to pursue it. ...
In this paper we advocate the study of speech as a knowledge resource, provide a brief introduction to the state of the art in speech recognition, describe a number of systems that use speech recognition ...
for a video mail application based on word-spotting using a 35-word indexing vocabulary chosen a priori for the specific domain. ...
doi:10.1147/sj.404.0985
fatcat:obidxaf6gbazhgoxabx4njjfgq
Future Vision of Interactive and Intelligent TV Systems using Edge AI
2020
SET INTERNATIONAL JOURNAL OF BROADCAST ENGINEERING
Edge AI means using in-device capabilities to run AI applications instead of running them in the cloud. ...
One of the new features to be addressed by this future application layer is the use of Artificial Intelligence technologies. ...
By using a concept-aware application layer, TV applications can use the semantics of video content as an anchor to trigger events. ...
doi:10.18580/setijbe.2020.4
fatcat:crbczniv3zaypmnfc7r7jntpsu
Event detection and recognition for semantic annotation of video
2010
Multimedia tools and applications
Research on methods for detection and recognition of events and actions in videos is receiving increasing attention from the scientific community, because of its relevance for many applications, from ...
Event detection and recognition requires considering the temporal aspect of video, either at the low level with appropriate features, or at a higher level with models and classifiers that can represent ...
LSCOM initiative [71] -that has created a specialised vocabulary for news video. ...
doi:10.1007/s11042-010-0643-7
fatcat:7x6x4r3n6rhnnofpqhbelgzc6y