
R-Clustering for Egocentric Video Segmentation [article]

Estefania Talavera, Mariella Dimiccoli, Marc Bolaños, Maedeh Aghaei, Petia Radeva
2017 arXiv   pre-print
ADWIN serves as a statistical upper bound for the clustering-based video segmentation.  ...  In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean-change detector and agglomerative clustering (AC) within an energy-minimization  ...  The R-Clustering Approach for Temporal Video Segmentation Due to the low temporal resolution of egocentric videos, as well as to the camera wearer's motion, temporally adjacent egocentric images may be  ... 
arXiv:1704.02809v1 fatcat:bvy5kmb2erbspmwun2xjmjaghq
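
A note on the agglomerative-clustering half of this entry: temporal contiguity can be enforced by restricting merges to neighbouring frames. The sketch below is a minimal illustration under assumptions of mine (dummy features, a chain connectivity graph, a fixed cluster count); it omits the ADWIN bound and the energy-minimization step the paper combines AC with.

```python
# Temporally constrained agglomerative clustering over dummy frame features.
import numpy as np
from scipy.sparse import diags
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # 200 frames x 64-dim features (dummy)

# Chain connectivity: each frame may only merge with its temporal neighbours,
# so every resulting cluster is a contiguous video segment.
n = X.shape[0]
connectivity = diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1])

ac = AgglomerativeClustering(n_clusters=8, linkage="ward",
                             connectivity=connectivity)
labels = ac.fit_predict(X)
boundaries = np.flatnonzero(np.diff(labels)) + 1   # segment start indices
print(boundaries)
```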

Towards Unsupervised Familiar Scene Recognition in Egocentric Videos [article]

Estefania Talavera, Nicolai Petkov, Petia Radeva
2019 arXiv   pre-print
We present a new method for familiar scene recognition in egocentric videos, based on background pattern detection through automatically configurable COSFIRE filters.  ...  We present some experiments over egocentric data acquired with the Narrative Clip.  ...  R-Clustering: Temporal Video Segmentation Given the problem of temporal segmentation of egocentric videos, we apply R-Clustering, introduced in [12].  ... 
arXiv:1905.04093v1 fatcat:wwej2r6xp5c2hc2oqngq77a4qe

Unsupervised Learning of Deep Feature Representation for Clustering Egocentric Actions

Bharat Lal Bhatnagar, Suriya Singh, Chetan Arora, C.V. Jawahar
2017 Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence  
This motivates the use of unsupervised methods for egocentric video analysis. In this work, we propose a robust and generic unsupervised approach for first-person action clustering.  ...  We demonstrate that clustering of such features leads to the discovery of semantically meaningful actions present in the video.  ...  They use hand-crafted global motion features to cluster the video segments.  ... 
doi:10.24963/ijcai.2017/200 dblp:conf/ijcai/BhatnagarSAJ17 fatcat:5bvryjxvt5fzfklloknkptsncy

Discovering important people and objects for egocentric video summarization

Yong Jae Lee, J. Ghosh, K. Grauman
2012 2012 IEEE Conference on Computer Vision and Pattern Recognition  
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day.  ...  Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.  ... 
doi:10.1109/cvpr.2012.6247820 dblp:conf/cvpr/LeeGG12 fatcat:p44k5wgxtnamfktrpgpklch7wu

Organizing Egocentric Videos for Daily Living Monitoring

Alessandro Ortis, Giovanni Maria Farinella, Valeria D'Amico, Luca Addesso, Giovanni Torrisi, Sebastiano Battiato
2016 Proceedings of the first Workshop on Lifelogging Tools and Applications - LTA '16  
By employing an unsupervised segmentation, each egocentric video is divided into chapters by considering the visual content.  ...  Egocentric videos are becoming popular since they offer the possibility to observe the scene from the user's point of view (First Person Vision).  ...  CONCLUSIONS AND FUTURE WORK This work proposes a framework to segment and organize a set of egocentric videos for daily living monitoring.  ... 
doi:10.1145/2983576.2983578 fatcat:dcsnizp4lzfkzaxbkyecnrqwom

Predicting Important Objects for Egocentric Video Summarization

Yong Jae Lee, Kristen Grauman
2015 International Journal of Computer Vision  
We present a video summarization approach for egocentric or "wearable" camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day.  ...  Our results on two egocentric video datasets show the method's promise relative to existing techniques for saliency and summarization.  ...  Others use brain waves [31], k-means clustering with temporal constraints [32], or face detection [33] to segment egocentric videos.  ... 
doi:10.1007/s11263-014-0794-5 fatcat:6cbcxftomjalhpn3bn5lykuvv4
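
The related-work snippet above mentions "k-means clustering with temporal constraints" [32] as one way to segment egocentric video. A minimal sketch of that general idea, under my own assumption (not from the paper) that a median filter over the frame-label sequence suffices to enforce temporal coherence:

```python
# k-means over dummy frame features, then temporal smoothing of the label
# sequence so segments become contiguous. k and window size are illustrative.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

X = np.random.default_rng(1).normal(size=(300, 32))       # dummy frame features
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
smoothed = median_filter(labels, size=9, mode="nearest")   # temporal constraint
cuts = np.flatnonzero(np.diff(smoothed)) + 1               # segment boundaries
print(cuts)
```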

SR-clustering: Semantic regularized clustering for egocentric photo streams segmentation

Mariella Dimiccoli, Marc Bolaños, Estefania Talavera, Maedeh Aghaei, Stavri G. Nikolov, Petia Radeva
2017 Computer Vision and Image Understanding  
This paper addresses the problem of organizing egocentric photo streams acquired by a wearable camera into semantically meaningful segments.  ...  The resulting temporal segmentation is particularly suited for further analysis, ranging from activity and event recognition to semantic indexing and summarization.  ...  Temporal Segmentation The SR-clustering for temporal segmentation is based on fusing the semantic and contextual features with the R-Clustering method described in [9].  ... 
doi:10.1016/j.cviu.2016.10.005 fatcat:okyxksx465hjlhvjxlgl2dko4y
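
The snippet says SR-clustering fuses semantic and contextual features before applying R-Clustering [9]. One plausible fusion step, shown purely as an assumption of mine (per-modality L2 normalization followed by concatenation; the paper's actual fusion may differ):

```python
import numpy as np

def fuse_features(contextual: np.ndarray, semantic: np.ndarray) -> np.ndarray:
    """Concatenate two per-frame feature matrices after L2-normalising each
    modality, so neither dominates the clustering distance."""
    c = contextual / (np.linalg.norm(contextual, axis=1, keepdims=True) + 1e-12)
    s = semantic / (np.linalg.norm(semantic, axis=1, keepdims=True) + 1e-12)
    return np.hstack([c, s])

# Usage with dummy data: 100 frames, CNN features plus semantic concept scores.
fused = fuse_features(np.random.rand(100, 512), np.random.rand(100, 40))
print(fused.shape)   # (100, 552)
```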

Coarse-to-fine online learning for hand segmentation in egocentric video

Ying Zhao, Zhiwei Luo, Changqin Quan
2018 EURASIP Journal on Image and Video Processing  
She also works for Ricoh Software Research Center (Beijing) Co., Ltd., Beijing, China.  ...  ZL and CQ are responsible for the final proofreading along with the technical support. All authors have read and approved the final manuscript.  ...  To address this issue, we propose a method for unsupervised hand detection and segmentation in egocentric video.  ... 
doi:10.1186/s13640-018-0262-1 fatcat:4nzgfsommngi3c2kopspmuzoe4

Video summarization by learning submodular mixtures of objectives

Michael Gygli, Helmut Grabner, Luc Van Gool
2015 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
It should thus be both interesting and representative of the input video. Previous methods often used simplified assumptions and only optimized for one of these goals.  ...  We present a novel method for summarizing raw, casually captured videos. The objective is to create a short summary that still conveys the story.  ...  $L_r(x^r, y) = \sum_{i \in V} \min_{s \in y} \lVert x^r_i - x^r_s \rVert_2^2$  (7), where $x^r$ are the features used to represent a segment. Here, we use global image features averaged over the segment frames for $x^r$.  ... 
doi:10.1109/cvpr.2015.7298928 dblp:conf/cvpr/GygliGG15 fatcat:wrq4beltcrhvnernoai45n43aa
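
Equation (7) above is a k-medoid-style representativeness objective: each segment is charged the squared distance to its nearest selected segment. A direct NumPy transcription (variable names are mine):

```python
import numpy as np

def representativeness_loss(X: np.ndarray, summary_idx) -> float:
    """L_r(x^r, y) = sum_{i in V} min_{s in y} ||x^r_i - x^r_s||_2^2  (Eq. 7).
    X holds one feature row per segment; summary_idx indexes the selected y."""
    S = X[summary_idx]                                     # selected segments
    d2 = ((X[:, None, :] - S[None, :, :]) ** 2).sum(-1)    # |V| x |y| sq. dists
    return float(d2.min(axis=1).sum())

# Example: 50 segments with 16-dim averaged features, summary of 5 segments.
X = np.random.default_rng(3).normal(size=(50, 16))
print(representativeness_loss(X, [0, 10, 20, 30, 40]))
```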

Toward Storytelling From Visual Lifelogging: An Overview

Marc Bolanos, Mariella Dimiccoli, Petia Radeva
2017 IEEE Transactions on Human-Machine Systems  
The pictures taken offer considerable potential for knowledge mining concerning how people live their lives; hence, they open up new opportunities for many potential applications in fields including healthcare  ...  However, automatically building a story from a huge collection of unstructured egocentric data presents major challenges.  ...  That work, designed for egocentric photo streams, uses a graph-cut algorithm to temporally segment the photo streams and includes an agglomerative clustering approach with concept-drifting methodology,  ... 
doi:10.1109/thms.2016.2616296 fatcat:zbxjzfagjnhq3f2dej7poikf3m

A Semi-Automated Method for Object Segmentation in Infant's Egocentric Videos to Study Object Perception [article]

Qazaleh Mirsharif, Sidharth Sadani, Shishir Shah, Hanako Yoshida, Joseph Burling
2016 arXiv   pre-print
The evaluations demonstrate the high speed and accuracy of the presented method for object segmentation in voluminous egocentric videos.  ...  Object segmentation in infant's egocentric videos is a fundamental step in studying how children perceive objects in early stages of development.  ...  Conclusions We proposed a semi-automated method for object segmentation in egocentric videos.  ... 
arXiv:1602.02522v1 fatcat:3h3xwdi7n5f73ixh3mtxlb4qlm

Edited nearest neighbour for selecting keyframe summaries of egocentric videos

Ludmila I. Kuncheva, Paria Yousefi, Jurandy Almeida
2018 Journal of Visual Communication and Image Representation  
A keyframe summary of a video must be concise, comprehensive and diverse.  ...  Current video summarisation methods may not be able to enforce diversity of the summary if the events have highly similar visual content, as is the case of egocentric videos.  ...  The regions for the egocentric video change the most, suggesting that GTS has a much stronger effect for this type of video.  ... 
doi:10.1016/j.jvcir.2018.02.010 fatcat:3lirgdpltvhurbrzw3htckejqq
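
For context, the edited nearest neighbour rule named in the title (Wilson's ENN) discards points whose nearest neighbours disagree with their own label. A minimal sketch, assuming event labels are already available; the data and k are dummy choices of mine, not the paper's setup:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def enn_keep_mask(X: np.ndarray, y: np.ndarray, k: int = 3) -> np.ndarray:
    """Keep a point only if the majority of its k nearest neighbours
    (excluding itself) share its label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    idx = nn.kneighbors(X, return_distance=False)[:, 1:]   # drop self-match
    votes = y[idx]
    majority = np.array([np.bincount(v).argmax() for v in votes])
    return majority == y

# Dummy usage: 200 frames, 2 event labels.
X = np.random.default_rng(4).normal(size=(200, 8))
y = (X[:, 0] > 0).astype(int)
print(enn_keep_mask(X, y, k=3).sum(), "frames kept")
```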

Seeing Invisible Poses: Estimating 3D Body Pose from Egocentric Video

Hao Jiang, Kristen Grauman
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We propose to infer the "invisible pose" of a person behind the egocentric camera. Given a single video, our efficient learning-based approach returns the full body 3D joint positions for each frame.  ...  Our method outperforms an array of possible alternatives, including deep learning approaches for direct pose regression from images.  ...  For everyday movement, K = 300 is sufficient. Then we train a classifier to obtain the function g(v, c) to extract the probability of video segment v matching the pose cluster c.  ... 
doi:10.1109/cvpr.2017.373 dblp:conf/cvpr/JiangG17a fatcat:k26innkp7zbsbkqrnn3eysv7zi
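
The last snippet of this entry outlines the pipeline: quantize poses into K = 300 clusters, then learn g(v, c), the probability that a video segment v matches pose cluster c. A hedged toy version using k-means plus a multinomial logistic classifier; the paper's actual features and classifier are not given in the snippet, and K is shrunk here for speed:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
poses = rng.normal(size=(2000, 42))       # dummy 3D-pose descriptors
K = 50                                    # the snippet uses K = 300
cluster_id = KMeans(n_clusters=K, n_init=4, random_state=0).fit_predict(poses)

segments = rng.normal(size=(2000, 64))    # dummy video-segment features v
g = LogisticRegression(max_iter=1000).fit(segments, cluster_id)
p = g.predict_proba(segments[:1])         # row of P(cluster c | segment v)
print(p.shape)                            # (1, K)
```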

Summarization of Egocentric Videos: A Comprehensive Survey

Ana Garcia del Molino, Cheston Tan, Joo-Hwee Lim, Ah-Hwee Tan
2016 IEEE Transactions on Human-Machine Systems  
This increasing flow of first-person video has led to a growing need for automatic video summarization adapted to the characteristics and applications of egocentric video.  ...  Next, we describe the existing egocentric video datasets suitable for summarization, and then the various evaluation methods.  ...  Besides segmenting the video deterministically into a fixed number of frames or a fixed duration [9, 18, 21, 23, 25, 26], we observe that the most frequently used features for egocentric video clustering  ... 
doi:10.1109/thms.2016.2623480 fatcat:nerpk2cb55gebkqfqkdmrjvlki
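
The deterministic baseline the survey contrasts with clustering (a fixed number of frames or a fixed duration per segment) is a one-liner; a sketch with frame rate and segment length assumed by me:

```python
import numpy as np

fps, seconds_per_segment = 30, 10          # assumed capture/segment settings
frames = np.arange(9000)                   # dummy 5-minute frame index stream
step = fps * seconds_per_segment
segments = [frames[i:i + step] for i in range(0, len(frames), step)]
print(len(segments), len(segments[0]))     # 30 segments of 300 frames each
```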

You-Do, I-Learn: Unsupervised Multi-User egocentric Approach Towards Video-Based Guidance [article]

Dima Damen, Teesid Leelasawassuk, Walterio Mayol-Cuevas
2016 arXiv   pre-print
This paper presents an unsupervised approach towards automatically extracting video-based guidance on object usage from egocentric video and wearable gaze tracking, collected from multiple users while  ...  The paper proposes a method for selecting a suitable video guide to be displayed to a novice user indicating how to use an object, purely triggered by the user's gaze.  ... 
arXiv:1510.04862v2 fatcat:go35figqofdtrlybhfia5cggce
Showing results 1 — 15 out of 1,459 results