
Generic play-break event detection for summarization and hierarchical sports video analysis

A. Ekin, M. Tekalp
2003 2003 International Conference on Multimedia and Expo. ICME '03. Proceedings (Cat. No.03TH8698)  
The proposed algorithm only uses shot-based generic cinematic features, such as shot type and shot length.  ...  We demonstrate the genericity of the proposed play-break detection algorithm over football, tennis, and basketball videos, and the effectiveness of the proposed soccer goal detection algorithm over a large data  ...  In contrast, although object-based features, such as player trajectories and their motion characteristics, enable higher-level video analysis, they are usually domain-specific (i.e., specific to the sports type) and their extraction  ... 
doi:10.1109/icme.2003.1220881 dblp:conf/icmcs/EkinT03 fatcat:rv5beszs2va2pdcu26r2cttoeu
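The entry above classifies shots into play and break segments from generic cinematic features such as shot type and shot length. As a rough, hedged illustration of that idea (not the authors' actual rules; the shot-type vocabulary and thresholds below are invented for the example), a shot-level labeller in Python might look like:

```python
# Simplified illustration of play-break labelling from generic cinematic features
# (shot type and shot length). The class names and thresholds are made up for the
# example and are NOT taken from Ekin & Tekalp's algorithm.
def label_shot(shot_type: str, shot_length_sec: float) -> str:
    """shot_type: 'long', 'medium' or 'close-up'; returns 'play' or 'break'."""
    if shot_type == "long" and shot_length_sec >= 4.0:
        return "play"    # sustained wide shots typically cover live play
    return "break"       # close-ups and short shots typically cover stoppages

shots = [("long", 12.3), ("close-up", 2.1), ("medium", 3.0), ("long", 8.7)]
labels = [label_shot(t, l) for t, l in shots]  # ['play', 'break', 'break', 'play']
```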

Batteries, camera, action! Learning a semantic control space for expressive robot cinematography [article]

Rogerio Bonatti, Arthur Bucker, Sebastian Scherer, Mustafa Mukadam, Jessica Hodgins
2021 arXiv   pre-print
Next, we analyze correlations between descriptors and build a semantic control space based on cinematography guidelines and human perception studies.  ...  We also show that our models generalize to different scenes in both simulation and real-world experiments. Data and video found at: https://sites.google.com/view/robotcam.  ...  The authors also thank Deepak Gopinath and Jack Urbanek for the help with the crowd-sourcing platform.  ... 
arXiv:2011.10118v2 fatcat:jmf2t7kbqfd77dklau3m3pssja

Pano2Vid: Automatic Cinematography for Watching 360° Videos [article]

Yu-Chuan Su, Dinesh Jayaraman, Kristen Grauman
2016 arXiv   pre-print
We introduce the novel task of Pano2Vid - automatic cinematography in panoramic 360° videos.  ...  Through experimental evaluation on multiple newly defined Pano2Vid performance measures against several baselines, we show that our method successfully produces informative videos that could conceivably  ...  This research is supported in part by NSF IIS-1514118 and a gift from Intel. We also gratefully acknowledge the support of Texas Advanced Computing Center (TACC).  ... 
arXiv:1612.02335v1 fatcat:typknj7ttvaupgndqowht6gqoi

Gradient-based 2D-to-3D Conversion for Soccer Videos

Kiana Calagari, Mohamed Elgharib, Piotr Didyk, Alexandre Kaspar, Wojciech Matusik, Mohamed Hefeeda
2015 Proceedings of the 23rd ACM international conference on Multimedia - MM '15  
We address this problem by showing how to construct a high-quality, domain-specific conversion method for soccer videos.  ...  We validate our method by conducting user-studies that evaluate depth perception and visual comfort of the converted 3D videos.  ...  [26] propose a semi-automatic 2D-to-3D conversion system based on multiple depth cues including motion and defocus.  ... 
doi:10.1145/2733373.2806262 dblp:conf/mm/CalagariEDKMH15 fatcat:gosmwahnnncjjf6b5643adm3em

Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Video [article]

Hou-Ning Hu, Yen-Chen Lin, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, Min Sun
2017 arXiv   pre-print
To relieve the viewer from this "360 piloting" task, we propose "deep 360 pilot" -- a deep learning-based agent for piloting through 360° sports videos automatically.  ...  We train domain-specific agents and achieve the best performance on viewing angle selection accuracy and transition smoothness compared to [51] and other baselines.  ...  Acknowledgements We thank NOVATEK, MEDIATEK and NVIDIA for their support.  ... 
arXiv:1705.01759v1 fatcat:5d2lg55slnadli3kh64zw7zqcm

Shot Classification of Field Sports Videos Using AlexNet Convolutional Neural Network

Rabia A. Minhas, Ali Javed, Aun Irtaza, Muhammad Tariq Mahmood, Young Bok Joo
2019 Applied Sciences  
Through the response normalization and dropout layers on the feature maps, we boosted the overall training and validation performance, evaluated over a diverse dataset of cricket and soccer videos.  ...  Therefore, in this research work, we propose an effective shot classification method based on AlexNet Convolutional Neural Networks (AlexNet CNN) for field sports videos.  ...  Raventos et al. [33] proposed a video summarization method for soccer based on audio-visual features.  ... 
doi:10.3390/app9030483 fatcat:wsmre3532nhyjp4xt4s4gpjbmq
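As a minimal sketch of the general approach named in the entry above (fine-tuning AlexNet for shot-type classification of sports frames), assuming PyTorch and torchvision >= 0.13 and an illustrative set of shot classes that is not taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

SHOT_CLASSES = ["long", "medium", "close-up", "crowd"]  # illustrative labels only

# Start from ImageNet-pretrained AlexNet and swap the final classifier layer.
model = models.alexnet(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, len(SHOT_CLASSES))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_frame(pil_image):
    """Predict the shot class of a single video frame given as a PIL image."""
    model.eval()
    with torch.no_grad():
        logits = model(preprocess(pil_image).unsqueeze(0))  # (1, num_classes)
        return SHOT_CLASSES[logits.argmax(dim=1).item()]
```

In practice the classifier head would be trained on labelled shots before use; the sketch only shows the model surgery and the inference path.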

CAMHID: Camera Motion Histogram Descriptor and Its Application to Cinematographic Shot Classification

Muhammad Abul Hasan, Min Xu, Xiangjian He, Changsheng Xu
2014 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
Index Terms: video shot classification, motion analysis, singular value decomposition.  ...  First, the proposed camera motion descriptors for video shot classification are computed on a video dataset consisting of regular camera motion patterns (e.g., pan, zoom, tilt, static).  ...  Template-based camera motion detection was used in [22], [23]. Lan et al. proposed a framework for home video camera motion analysis in [23].  ... 
doi:10.1109/tcsvt.2014.2345933 fatcat:3662vrk3ffgtxiiwkrd2m5deva
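The CAMHID entry above builds shot classification on histograms of camera motion. As a rough illustration of that family of features (not the CAMHID descriptor itself, which applies singular value decomposition to local motion vectors), the sketch below histograms frame-to-frame dense optical-flow directions over a shot, assuming OpenCV and NumPy are available:

```python
import cv2
import numpy as np

def motion_direction_histogram(frames, bins=8):
    """frames: list of grayscale frames (H x W uint8) from one shot."""
    hist = np.zeros(bins, dtype=np.float64)
    for prev, curr in zip(frames, frames[1:]):
        # Dense optical flow between consecutive frames (Farneback's method).
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # angle in radians
        moving = mag > 1.0                       # ignore near-static pixels
        idx = (ang[moving] / (2 * np.pi) * bins).astype(int) % bins
        np.add.at(hist, idx, mag[moving])        # magnitude-weighted direction bins
    total = hist.sum()
    return hist / total if total > 0 else hist   # normalized over the shot
```

A pan concentrates mass in one direction bin, a zoom spreads it across all bins, and a static shot leaves the histogram nearly empty, which is roughly the kind of signal a pan/zoom/tilt/static classifier is trained on.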

Hyper 360—Towards a Unified Tool Set Supporting Next Generation VR Film and TV Productions

Barnabas Takacs, Zsuzsanna Vincze, Hannes Fassold, Antonis Karakottas, Nikolaos Zioulis, Dimitrios Zarpalas, Petros Daras
2019 Journal of Software Engineering and Applications  
We describe four fundamental challenges that complex real-life Virtual Reality (VR) productions are facing today (such as multi-camera management, quality control, automatic annotation with cinematography ... and 360° depth estimation) and describe an integrated solution, called Hyper 360, to address them.  ...  Acknowledgements This work has received funding from the European Union's Horizon 2020 research and innovation programme, grant no. 761934, Hyper 360 ("Enriching 360 media with 3D storytelling and personalisation  ... 
doi:10.4236/jsea.2019.125009 fatcat:t44vzsg5oreyvmdrenhreiu26i

Enhancing Audiovisual Experience with Haptic Feedback: A Survey on HAV

Fabien Danieau, Anatole Lecuyer, Philippe Guillotel, Julien Fleureau, Nicolas Mollet, Marc Christie
2013 IEEE Transactions on Haptics  
By building on existing technologies and tackling the specific challenges of enhancing the audiovisual experience with haptics, we believe the field presents exciting research perspectives whose financial  ...  Haptic technology has been widely employed in applications ranging from teleoperation and medical simulation to art and design, including entertainment, flight simulation and virtual reality.  ...  However, the main focus was on how to render the effects rather than on video analysis.  ... 
doi:10.1109/toh.2012.70 pmid:24808303 fatcat:hnhsdbvqpvgizo6w6ldt74jjre

Deep Unsupervised Multi-View Detection of Video Game Stream Highlights [article]

Charles Ringer, Mihalis A. Nicolaou
2018 arXiv   pre-print
., score change, end-game), while most advanced tools and techniques are based on detection of highlights via visual analysis of game footage.  ...  We consider the problem of automatic highlight-detection in video game streams.  ...  Nguyen and Yoshitaka [20] adopt a cinematography- and motion-based approach, whereby they analysed the type of camera shots used in order to detect highlights, especially emotional events.  ... 
arXiv:1807.09715v1 fatcat:nddyimoffncdfmoprhwyn2nhuy

Deep unsupervised multi-view detection of video game stream highlights

Charles Ringer, Mihalis A. Nicolaou
2018 Proceedings of the 13th International Conference on the Foundations of Digital Games - FDG '18  
., score change, end-game), while most advanced tools and techniques are based on detection of highlights via visual analysis of game footage.  ...  We consider the problem of automatic highlight-detection in video game streams.  ...  Sun et al. in [29] analysed the excitement  ...  [20] adopt a cinematography- and motion-based approach, whereby they analysed the type of camera shots used in order to detect highlights, especially emotional  ... 
doi:10.1145/3235765.3235781 dblp:conf/fdg/RingerN18 fatcat:of3vf4hejjhdxe5nlfs3bvg2gi

Taxonomy of Directing Semantics for Film Shot Classification

Hee Lin Wang, Loong-Fah Cheong
2009 IEEE Transactions on Circuits and Systems for Video Technology (Print)  
based on edge occlusion reasoning.  ...  These motion-related semantics are grounded upon cinematography and are thus more appealing to users.  ...  Without the knowledge of cinematography, a movie may be blindly summarized by maximizing its visual entropy.  ... 
doi:10.1109/tcsvt.2009.2022705 fatcat:nfunz7gqkvfybiuzwn5kcvq2ke

Thermal and illumination effects on a PbI2 nanoplate and its transformation to CH3NH3PbI3 perovskite

Jiming Wang, Dongxu Lin, Tiankai Zhang, Mingzhu Long, Tingting Shi, Ke Chen, Zhihong Liang, Jianbin Xu, Weiguang Xie, Pengyi Liu
2019 CrystEngComm  
and mechanism are discussed.  ...  The vapor transformation of crystalline PbI2 nanoplates into CH3NH3PbI3 under annealing and illumination conditions was systematically investigated at the nanoscale, and the detailed pathway of structural transformation  ...  Acknowledgments: The authors wish to acknowledge the support from Gaojun Ren, Yubin Wang, and Dong Zhang of Tianjin University, who helped with the data collection.  ... 
doi:10.1039/c8ce02048e fatcat:epqanoejpffvldgqgszfxfgfke

Motion analysis systems as optimization training tools in combat sports and martial arts

Ewa Polak, Jerzy Kulasa, António VencesBrito, Maria António Castro, Orlando Fernandes
2016 Revista de Artes Marciales Asiáticas  
Scientific studies conducted so far have shown the usefulness of video-based, optical and electromechanical systems.  ...  The presentation and discussion take place in the following sections: motion analysis utility for combat sports and martial arts, systems using digital video, and systems using markers, sensors or transmitters  ...  The analysis of motion technique registered on video (called cinematographic analysis or videography) can be a qualitative as well as a quantitative process.  ... 
doi:10.18002/rama.v10i2.1687 fatcat:j436tbzonbfvvdcpkgob2inyh4

Multimodal extraction of events and of information about the recording activity in user generated videos

Francesco Cricri, Kostadin Dabov, Igor D. D. Curcio, Sujeet Mate, Moncef Gabbouj
2012 Multimedia tools and applications  
For this kind of scenario, we jointly analyze these multiple video recordings and their associated sensor modalities in order to extract higher-level semantics of the recorded media: based on the orientation  ...  We show that the proposed multimodal analysis methods perform well on various recordings obtained at real live music performances.  ...  These methods are all based on video-content analysis.  ... 
doi:10.1007/s11042-012-1085-1 fatcat:4vzmycrhyzarvbzjwql3t4tr7i
Showing results 1-15 of 89