44,077 Hits in 3.7 sec

Exploring temporal consistency for video analysis and retrieval

Jun Yang, Alexander G. Hauptmann
2006 Proceedings of the 8th ACM international workshop on Multimedia information retrieval - MIR '06  
This paper presents a thorough study of temporal consistency defined with respect to semantic concepts and query topics using quantitative measures, and discusses its implications for video analysis and  ...  Temporal consistency is ubiquitous in video data, where temporally adjacent video shots usually share similar visual and semantic content.  ...  Temporal consistency provides valuable contextual clues to video analysis and retrieval tasks.  ... 
doi:10.1145/1178677.1178685 dblp:conf/mir/YangH06 fatcat:gqb5jtdlyvd3dkveiwpsjojoii
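
The temporal-consistency cue described in this entry — temporally adjacent shots usually share similar semantic content — is commonly exploited by smoothing per-shot concept scores over a small temporal window. A minimal illustrative sketch (not the paper's actual method; the window size and scores are hypothetical):

```python
def smooth_shot_scores(scores, window=1):
    """Average each shot's concept score with its temporal neighbours."""
    smoothed = []
    n = len(scores)
    for i in range(n):
        lo = max(0, i - window)
        hi = min(n, i + window + 1)
        smoothed.append(sum(scores[lo:hi]) / (hi - lo))
    return smoothed

# An isolated spike between low-scoring neighbours is damped,
# while a consistent run of high scores is largely preserved.
raw = [0.1, 0.9, 0.1, 0.8, 0.9, 0.8]
print(smooth_shot_scores(raw))
```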

Hybrid Spatiotemporal Contrastive Representation Learning for Content-Based Surgical Video Retrieval

Vidit Kumar, Vikas Tripathi, Bhaskar Pant, Sultan S. Alshamrani, Ankur Dumka, Anita Gehlot, Rajesh Singh, Mamoon Rashid, Abdullah Alshehri, Ahmed Saeed AlGhamdi
2022 Electronics  
In this regard, previous methods for surgical video retrieval are based on handcrafted features which do not represent the video effectively.  ...  These types of surgeries are often recorded during operations, and these recordings have become a key resource for education, patient disease analysis, surgical error analysis, and surgical skill assessment  ...  Therefore, there is a high requirement for automated content-based surgical video analysis methods for both analysis and searching for desired videos.  ... 
doi:10.3390/electronics11091353 fatcat:zmnshgwnhfg3boritopyouw4jm

Measuring the impact of temporal context on video retrieval

Daragh Byrne, Peter Wilkins, Gareth J.F. Jones, Alan F. Smeaton, Noel E. O'Connor
2008 Proceedings of the 2008 international conference on Content-based image and video retrieval - CIVR '08  
In video retrieval these cues often include temporal information such as a shot's location within the overall video broadcast and/or its neighbouring shots.  ...  In this paper we describe the findings from the K-Space interactive video search experiments in TRECVid 2007, which examined the effects of including temporal context in video retrieval.  ...  ACKNOWLEDGMENTS This work is supported by the European Commission under contract FP6-027026 (K-Space), the Irish Research Council for Science Engineering and Technology and by Science Foundation Ireland  ... 
doi:10.1145/1386352.1386393 dblp:conf/civr/ByrneWJSO08 fatcat:gy3jkax2kbeffhgagu7rihqx3e

An Exploratory Analysis Tool for a Long-Term Video from a Stationary Camera

R. Nogami, B. Shizuki, H. Hosobe, J. Tanaka
2012 2012 IEEE 24th International Conference on Tools with Artificial Intelligence  
The tool consists of three key methods: spatial change visualization, temporal change visualization, and similarity-based video retrieval.  ...  We present an interactive tool for the exploratory analysis of a long-term video from a stationary camera.  ...  Our tool provides three key methods for exploratory video analysis, namely, spatial change visualization, temporal change visualization, and similarity-based video retrieval.  ... 
doi:10.1109/ictai.2012.185 dblp:conf/ictai/NogamiSHT12 fatcat:7u4xal5uavhadcnaop46qlf5yu

Guest Editorial: Spatio-temporal Feature Learning for Unconstrained Video Analysis

Yahong Han, Liqiang Nie, Fei Wu
2018 Multimedia tools and applications  
This has encouraged research on video analysis, which can boost the development of techniques for the management and applications of videos, such as video retrieval, video classification, action recognition  ...  As the visual content and temporal consistency of unconstrained videos are more complex, there are still challenges in video analysis and practical applications.  ...  Acknowledgments This work is supported by the NSFC (under Grants U1509206 and 61472276) and the Tianjin Natural Science Foundation (no. 15JCYBJC15400).  ... 
doi:10.1007/s11042-018-6341-6 fatcat:wsp2hi2gyfgkra6yr2pvo2wbgy

On clustering and retrieval of video shots through temporal slices analysis

Chong-Wah Ngo, Ting-Chuen Pong, Hong-Jiang Zhang
2002 IEEE transactions on multimedia  
Based on the analysis of temporal slices, we propose novel approaches for clustering and retrieval of video shots.  ...  In this paper, we first demonstrate that tensor histogram features extracted from temporal slices are suitable for motion retrieval.  ...  Discussion Three new temporal texture features based on the analysis of temporal slices have been presented and applied to motion retrieval in sport video databases.  ... 
doi:10.1109/tmm.2002.802022 fatcat:zvuowrweuncklmdybfx5djog6u

Guest Editorial: Special Issue on Recent Advances in Content Analysis for Media Computing

Kim-Hui Yap, Lap-Pui Chau, Kap-Luk Chan
2009 Journal of Signal Processing Systems  
As opposed to image data, videos contain temporal information and consist of audio and visual components.  ...  Together, they explore how effective audio and visual features can be combined to represent a video and how user interaction can be integrated to improve the performance of video search.  ... 
doi:10.1007/s11265-009-0433-5 fatcat:snvjjbrtcfad3kn6psioj4b3fa

Exploring the Temporal Cues to Enhance Video Retrieval on Standardized CDVA

Won Jo, Guentaek Lim, Joonsoo Kim, Joungil Yun, Yukyung Choi
2022 IEEE Access  
As the demand for large-scale video analysis increases, video retrieval research is also becoming more active.  ...  In 2014, ISO/IEC MPEG began standardizing compact descriptors for video analysis, known as CDVA, and it is now adopted as a standard.  ...  To this end, the Moving Picture Experts Group (MPEG) has performed large-scale video analysis through standardization of compact descriptors for video analysis (CDVA) [4], and this approach has been  ... 
doi:10.1109/access.2022.3165177 fatcat:wxsq42u46vdsrjez5d4kuzgcxm

Learning Temporal Embeddings for Complex Video Analysis

Vignesh Ramanathan, Kevin Tang, Greg Mori, Li Fei-Fei
2015 2015 IEEE International Conference on Computer Vision (ICCV)  
We evaluate various design decisions for learning temporal embeddings, and show that our embeddings can improve performance for multiple video tasks such as retrieval, classification, and temporal order  ...  In this paper, we propose to learn temporal embeddings of video frames for complex video analysis. Large quantities of unlabeled video data can be easily obtained from the Internet.  ...  Karpathy and S. Yeung for helpful comments. This research is partially supported by grants from ONR MURI and Intel ISTC-PC.  ... 
doi:10.1109/iccv.2015.508 dblp:conf/iccv/RamanathanTML15 fatcat:q7xxusqqqjaxhdhxnfbwvgxmue
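
The core idea of temporal embeddings described in this entry — frames that are temporally close should lie near each other in the embedding space — can be captured with a margin-based ranking loss. A minimal sketch under that assumption (illustrative only, not the authors' actual objective or architecture):

```python
import numpy as np

def temporal_margin_loss(anchor, neighbor, negative, margin=0.2):
    """Hinge loss: a frame embedding should be closer to its temporal
    neighbour than to a frame sampled from elsewhere in the video set."""
    d_pos = np.linalg.norm(anchor - neighbor)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, margin + d_pos - d_neg)
```

Minimizing this loss over many (anchor, neighbour, negative) triples pulls adjacent frames together and pushes unrelated frames at least `margin` further away.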

Learning Unsupervised Visual Representations using 3D Convolutional Autoencoder with Temporal Contrastive Modeling for Video Retrieval

Vidit Kumar, Vikas Tripathi, Bhaskar Pant
2022 International journal of mathematical, engineering and management sciences  
The rapid growth of tag-free user-generated videos (on the Internet), surgical recorded videos, and surveillance videos has necessitated effective content-based video retrieval systems.  ...  Earlier methods for video representation are based on hand-crafted features, which hardly performed well on video retrieval tasks.  ...  Acknowledgments This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.  ... 
doi:10.33889/ijmems.2022.7.2.018 fatcat:fz7ola4jmneq5j2j74rfzwdi2y

Retrieval Augmented Convolutional Encoder-Decoder Networks for Video Captioning

Jingwen Chen, Yingwei Pan, Yehao Li, Ting Yao, Hongyang Chao, Tao Mei
2022 ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP)  
Specifically, for each query video, a video-sentence retrieval model is first utilized to fetch semantically relevant sentences from the training sentence pool, coupled with the corresponding training  ...  In this paper, we uniquely introduce a Retrieval Augmentation Mechanism (RAM) that enables the explicit reference to existing video-sentence pairs within any encoder-decoder captioning model.  ...  trained with a semantic loss for cross-modal consistency between video and caption.  ... 
doi:10.1145/3539225 fatcat:na34xvi25bcnfes7p43kdaqjge
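
The retrieval step this entry describes — fetching semantically relevant training sentences for a query video — is typically a nearest-neighbour lookup in a shared embedding space. A minimal cosine-similarity sketch (the embeddings and helper name are hypothetical, not the paper's actual model):

```python
import numpy as np

def retrieve_sentences(query_emb, sent_embs, sentences, k=2):
    """Return the k training sentences whose embeddings are most
    cosine-similar to the query video embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    s = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    sims = s @ q                      # cosine similarity per sentence
    top = np.argsort(-sims)[:k]      # indices of the k best matches
    return [sentences[i] for i in top]
```

The retrieved sentences would then be fed to the captioning decoder as additional context alongside the video features.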

Video browsing interfaces and applications: a review

Klaus Schoeffmann
2010 SPIE Reviews  
., storage, retrieval, sharing) employing video data in the past decade, both for personal and professional use.  ...  Our survey reviews more than 40 different video browsing and retrieval interfaces and classifies them into three groups: applications that use video player-like interaction, video retrieval applications  ...  For news videos, most approaches use text recognition and a few apply face detection. In contrast, for sports, motion and speech analysis are typically used.  ... 
doi:10.1117/6.0000005 fatcat:xeanz3f6pnaizmtno4mumffhoq

Semantic-Based Video Retrieval Survey

Shaimaa Toriah Mohamed Toriah, Atef Zaki Ghalwash, Aliaa A. A. Youssif
2018 Journal of Computer and Communications  
Digital data include image, text, and video. Video represents a rich source of information. Thus, there is an urgent need to retrieve, organize, and automate videos.  ...  In this paper, the different approaches of video retrieval are outlined and briefly categorized.  ...  In addition, the content-based video retrieval methods are divided into video segmentation and video feature analysis and extraction methods.  ... 
doi:10.4236/jcc.2018.68003 fatcat:qfep2py7ufhwxea7vazpujltja

All in One: Exploring Unified Video-Language Pre-training [article]

Alex Jinpeng Wang, Yixiao Ge, Rui Yan, Yuying Ge, Xudong Lin, Guanyu Cai, Jianping Wu, Ying Shan, Xiaohu Qie, Mike Zheng Shou
2022 arXiv   pre-print
Mainstream Video-Language Pre-training models consist of three parts, a video encoder, a text encoder, and a video-text fusion Transformer.  ...  Our pre-trained all-in-one Transformer is transferred to various downstream video-text tasks after fine-tuning, including text-video retrieval, video-question answering, multiple choice and visual commonsense  ...  We would like to thank David Junhao Zhang for his kindly help on Transformer training.  ... 
arXiv:2203.07303v1 fatcat:ypguqswusnhf5nxqoiiq275vga

Exploiting redundancy in cross-channel video retrieval

Bouke Huurnink, Maarten de Rijke
2007 Proceedings of the international workshop on Workshop on multimedia information retrieval - MIR '07  
We describe this phenomenon, and use it to develop a framework to incorporate redundancy for cross-channel retrieval of visual items using speech.  ...  Video producers, in telling a news story, tend to repeat important visual and speech material multiple times in adjacent shots, thus creating a certain level of redundancy.  ...  ACKNOWLEDGMENTS The authors would like to thank Jan-Mark Geusebroek for his valuable assistance in modelling the various distributions outlined in this paper.  ... 
doi:10.1145/1290082.1290109 dblp:conf/mir/HuurninkR07 fatcat:mmz7ud4eyzcb7ki34mdnlzqstm
Showing results 1 — 15 out of 44,077 results