3,019 Hits in 8.7 sec

Content-based video indexing for sports applications using integrated multi-modal approach

Dian Tjondronegoro, Yi-Ping Phoebe Chen, Binh Pham
2005 Proceedings of the 13th annual ACM international conference on Multimedia - MULTIMEDIA '05  
This doctoral thesis consists of research based on an integrated multi-modal approach for sports video indexing and retrieval.  ...  To sustain an ongoing rapid growth of video information, there is an emerging demand for a sophisticated content-based video indexing system.  ...  Sports video is selected as the primary domain due to its content richness and popularity.  ... 
doi:10.1145/1101149.1101362 dblp:conf/mm/TjondronegoroCP05 fatcat:r7flpxbh4fbfrbtnepauzo26du

Teaching Machines to Understand Baseball Games: Large-Scale Baseball Video Database for Multiple Video Understanding Tasks [chapter]

Minho Shim, Young Hwi Kim, Kyungmin Kim, Seon Joo Kim
2018 Lecture Notes in Computer Science  
A major obstacle in teaching machines to understand videos is the lack of training data, as creating temporal annotations for long videos requires a huge amount of human effort.  ...  To this end, we introduce a new large-scale baseball video dataset called the BBDB, which is produced semi-automatically by using play-by-play texts available online.  ...  On average, annotating 3 hours of video takes around 4 hours, while our semi-automatic method takes about 5 minutes per game.  ... 
doi:10.1007/978-3-030-01267-0_25 fatcat:3oqweqmqvvacdjnsprxnkec4hm

Revealing the Invisible: Visual Analytics and Explanatory Storytelling for Advanced Team Sport Analysis

Manuel Stein, Thorsten Breitkreutz, Johannes Haussler, Daniel Seebacher, Christoph Niederberger, Tobias Schreck, Michael Grossniklaus, Daniel Keim, Halldor Janetzko
2018 2018 International Symposium on Big Data Visual and Immersive Analytics (BDVA)  
In this paper, we propose a four-step analytics conceptual workflow for an automatic selection of appropriate views for key situations in soccer games.  ...  Identifying suitable visualizations for a specific situation is key to a successful analysis.  ...  ACKNOWLEDGMENT We thank our domain experts for their valuable feedback and discussions.  ... 
doi:10.1109/bdva.2018.8534022 dblp:conf/bdva/SteinBHSNSGKJ18 fatcat:m3dicdn6zzch5f2mcqjvnndl44

Sports Video Analysis: Semantics Extraction, Editorial Content Creation and Adaptation

Changsheng Xu, Jian Cheng, Yi Zhang, Yifan Zhang, Hanqing Lu
2009 Journal of Multimedia  
We first propose a generic multi-layer and multi-modal framework for sports video analysis.  ...  Sports video analysis has been a hot research area and a number of potential applications have been identified.  ...  Various approaches and prototypes have been proposed and developed to automatically or semi-automatically analyze sports video content, extract semantic events or highlights, intelligently adapt, enhance  ... 
doi:10.4304/jmm.4.2.69-79 fatcat:xytusontr5cyxlxpyqgljnhkqu

Semantic annotation of sports videos

J. Assfalg, M. Bertini, C. Colombo, A.D. Bimbo
2002 IEEE Multimedia  
Acknowledgments This work was partially supported by the ASSAVID EU Project (Automatic Segmentation and Semantic Annotation of Sports Videos, http://www.bpe-rnd.co.uk/assavid/) under contract IST-13082  ...  The consortium comprises ACS SpA, Italy; BBC R&D, UK; Institut Dalle Molle D'Intelligence Artificielle Perceptive (Dalle Molle Institute for Perceptual Artificial Intelligence), Switzerland; Sony BPE, UK  ...  (For example, Miyamori and Iisaku [4] proposed a method for annotating videos according to human behavior; Ariki and Sugiyama [5] proposed a method for classifying TV sports news videos using discrete cosine  ... 
doi:10.1109/93.998060 fatcat:mlkgado5hnenjolsj3syib5wce

Indirect Match Highlights Detection with Deep Convolutional Neural Networks [article]

Marco Godi, Paolo Rota, Francesco Setti
2017 arXiv   pre-print
Highlights in a sports video are usually understood as actions that stimulate excitement or attract the attention of the audience.  ...  Considerable effort is spent designing techniques that find highlights automatically, in order to automate the otherwise manual editing process.  ...  [15] presents a method that uses audio signals to build video highlights for baseball games.  ... 
arXiv:1710.00568v1 fatcat:zip2quma6feafmws4usrpsuylq

A Generic Framework for Video Annotation via Semi-Supervised Learning

Tianzhu Zhang, Changsheng Xu, Guangyu Zhu, Si Liu, Hanqing Lu
2012 IEEE transactions on multimedia  
In this paper, we propose a novel approach based on semi-supervised learning by means of information from the Internet for interesting event annotation in videos.  ...  Concretely, a Fast Graph-based Semi-Supervised Multiple Instance Learning (FGSSMIL) algorithm, which aims to simultaneously tackle these difficulties in a generic framework for various video domains (e.g.,  ...  A Generic Framework for Video Annotation via Semi-Supervised Learning I.  ... 
doi:10.1109/tmm.2012.2191944 fatcat:7uaujwzq4nfrto5jaim7bf4ify

Sports video summarization using highlights and play-breaks

Dian Tjondronegoro, Yi-Ping Phoebe Chen, Binh Pham
2003 Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval - MIR '03  
To manage the massive growth of sports video, we need to summarize the contents into a more compact and interesting representation.  ...  However, due to the amount of noise in sports audio, fast text-display detection will be used for verification of the detected highlights.  ...  For semi-automatic construction of the sports video summary, we have developed efficient detection algorithms for whistle, excitement and text which are reliable and precise (despite their simplicity and  ... 
doi:10.1145/973264.973296 dblp:conf/mir/TjondronegoroCP03 fatcat:hxwv4gfrxbfvzbtadyyoerz2am

Detecting complex events in user-generated video using concept classifiers

Jinlin Guo, David Scott, Frank Hopfgartner, Cathal Gurrin
2012 2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)  
The method starts by manually selecting a variety of relevant concepts, followed by constructing classifiers for these concepts.  ...  Automatic detection of complex events in user-generated videos (UGV) is a challenging task due to their new characteristics differing from broadcast video.  ...  Moreover, many thanks to the HMA group for their help in the annotation effort.  ... 
doi:10.1109/cbmi.2012.6269799 dblp:conf/cbmi/GuoSHG12 fatcat:utulny247zgftjvzntrxc7n3wq

Multi-level Semantic Analysis for Sports Video [chapter]

Dian W. Tjondronegoro, Yi-Ping Phoebe Chen
2005 Lecture Notes in Computer Science  
To sustain an ongoing rapid growth of sports video, there is an emerging demand for a sophisticated content-based indexing system.  ...  There has been a huge increase in the utilization of video as one of the most preferred types of media due to its content richness for many significant applications including sports.  ...  For example, a sports summary can be structured using integrated highlights and play-breaks with some text-alternative annotations [8].  ... 
doi:10.1007/11552451_4 fatcat:tf44tus6h5fqhbumurxcq5sd5i

Dynamic pictorial ontologies for video digital libraries annotation

Marco Bertini, Alberto Del Bimbo, Carlo Torniai, Costantino Grana, Rita Cucchiara
2007 Workshop on multimedia information retrieval on The many faces of multimedia semantics - MS '07  
Motivations for this new ontology paradigm are discussed, together with a proposed framework for ontology creation, maintenance, and automatic annotation of video.  ...  In this paper, we present the dynamic pictorial ontology paradigm for video annotation.  ...  Visual features that will be used for the creation and maintenance of the ontology and for the automatic annotation of videos must be computed from each shot.  ... 
doi:10.1145/1290067.1290076 dblp:conf/mm/BertiniBTGC07 fatcat:rgxi3a43yrdx3h6g3zprcj5tfq

Learning to Learn from Noisy Web Videos

Serena Yeung, Vignesh Ramanathan, Olga Russakovsky, Liyue Shen, Greg Mori, Li Fei-Fei
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Our method uses Q-learning to learn a data labeling policy on a small labeled training dataset, and then uses this to automatically label noisy web data for new visual concepts.  ...  In this work, we instead propose a reinforcement learning-based formulation for selecting the right examples for training a classifier from noisy web search results.  ...  Acknowledgments Our work is supported by an ONR MURI grant and a hardware donation from NVIDIA.  ... 
doi:10.1109/cvpr.2017.788 dblp:conf/cvpr/YeungRRSMF17 fatcat:zlwutlx7bvcjlniwbq4lmiykxm

Learning to Learn from Noisy Web Videos [article]

Serena Yeung, Vignesh Ramanathan, Olga Russakovsky, Liyue Shen, Greg Mori, Li Fei-Fei
2017 arXiv   pre-print
Our method uses Q-learning to learn a data labeling policy on a small labeled training dataset, and then uses this to automatically label noisy web data for new visual concepts.  ...  In this work, we instead propose a reinforcement learning-based formulation for selecting the right examples for training a classifier from noisy web search results.  ...  Acknowledgments Our work is supported by an ONR MURI grant and a hardware donation from NVIDIA.  ... 
arXiv:1706.02884v1 fatcat:lfbm64o6kfgktpucbtvgzrtxvm

A Novel Method for Super Imposed Text Extraction in a Sports Video

V. Vijayakumar, R. Nedunchezhian
2011 International Journal of Computer Applications  
This paper provides a novel method of detecting video text regions containing player information and scores in sports videos.  ...  Text data present in video contains useful information for automatic annotation, structuring, mining, indexing and retrieval of video.  ...  Generating captions or annotations automatically for video is a challenging task. It enables text-based querying and content summarization.  ... 
doi:10.5120/1915-2553 fatcat:oupwr7asazh3ljj4ibyj42gntq

MediaDiver

Gregor Miller, Sidney Fels, Abir Al Hajri, Michael Ilich, Zoltan Foley-Fisher, Manuel Fernandez, Daesik Jang
2011 Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems - CHI EA '11  
Our proposal is a demonstration of the technology required to meet these expectations for video.  ...  to follow targets, integrated annotation methods for viewing or authoring meta-content and advanced context-sensitive transport and timeline functions.  ...  An example of a multi-video space is a broadcast sporting event, where a broadcast manager can select their own views based on personal goals.  ... 
doi:10.1145/1979742.1979711 dblp:conf/chi/MillerFHIFFJ11 fatcat:lo35ov33nvgftiuwwxjlyskanm
Showing results 1 — 15 out of 3,019 results