1,292 Hits in 4.7 sec

Ego-Surfing First-Person Videos

Ryo Yonetani, Kris Kitani, Yoichi Sato
2017 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Motivated by these benefits and risks, we developed a self-search technique tailored to first-person videos.  ...  We envision a future time when wearable cameras are worn by the masses, recording first-person point-of-view videos of everyday life.  ...  Much like ego-surfing enables us to perform an Internet search with our own name, we believe that self-search in first-person videos can empower users to monitor and manage their own personal data.  ... 
doi:10.1109/tpami.2017.2771767 pmid:29990151 fatcat:xrocmzxdfndnxkko4w2n3mya4y

Ego-surfing first person videos

Ryo Yonetani, Kris M. Kitani, Yoichi Sato
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Motivated by these benefits and risks, we develop a self-search technique tailored to first-person POV videos.  ...  (e.g., small cameras in glasses or pinned on a shirt collar) are worn by the masses and record first-person point-of-view (POV) videos of everyday life.  ...  Much like ego-surfing enables us to perform an Internet search with our own name, we believe that self-search in first-person videos can empower users to monitor and manage their own personal data.  ... 
doi:10.1109/cvpr.2015.7299183 dblp:conf/cvpr/YonetaniKS15 fatcat:5xd53ydvwva5jcclmedy2y6ime

Fast unsupervised ego-action learning for first-person sports videos

Kris M. Kitani, Takahiro Okabe, Yoichi Sato, Akihiro Sugimoto
2011 CVPR  
Portable high-quality sports cameras (e.g. head- or helmet-mounted) built for recording dynamic first-person video footage are becoming a common item among many sports enthusiasts.  ...  Our approach assumes a completely unsupervised scenario, where labeled training videos are not available, videos are not pre-segmented, and the number of ego-action categories is unknown.  ...  To the best of our knowledge, this is the first work to deal with the novel task of discovering ego-action categories from first-person sports videos.  ... 
doi:10.1109/cvpr.2011.5995406 dblp:conf/cvpr/KitaniOSS11 fatcat:hf24ho4g7jfm3nxddavtf52s7i

Obstacle Detection Techniques in Outdoor Environment: Process, Study and Analysis

Yadwinder Singh, Lakhwinder Kaur
2017 International Journal of Image Graphics and Signal Processing  
Obstacle detection is the process in which upcoming objects in the path are detected and collision with them is avoided by some form of signalling to the visually impaired person.  ...  The survey discusses the analysis of the related work reported in the literature on SURF and SIFT features, monocular vision-based approaches, texture features and ground-plane obstacle detection  ...  [11] gave an ego-motion evaluation technique and grouping approach to segment moving obstacles from videos.  ... 
doi:10.5815/ijigsp.2017.05.05 fatcat:7fuigoy4czhnnaygb3f4c7it6m

First Person Action Recognition Using Deep Learned Descriptors

Suriya Singh, Chetan Arora, C. V. Jawahar
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
We focus on the problem of the wearer's action recognition in first-person, a.k.a. egocentric, videos.  ...  This problem is more challenging than third-person activity recognition due to the unavailability of the wearer's pose and the sharp movements in the videos caused by the natural head motion of the wearer.  ...  We focus on the recognition of the wearer's actions (or first-person actions) from egocentric videos in each frame.  ... 
doi:10.1109/cvpr.2016.287 dblp:conf/cvpr/SinghAJ16 fatcat:xy7qjd3cvbgydmchfwv2upz3ju

Summarization of Egocentric Moving Videos for Generating Walking Route Guidance [chapter]

Masaya Okamoto, Keiji Yanai
2014 Lecture Notes in Computer Science  
In the experiments, we prepared an egocentric moving video dataset totaling more than one hour of video, and evaluated crosswalk detection and ego-motion classification methods.  ...  To summarize an egocentric video, we analyze it by applying pedestrian crosswalk detection as well as ego-motion classification, and estimate an importance score for each section of the given video.  ...  Ego-motion Classification In general, a video taken by a wearable camera is called "an egocentric video" or "a first-person video".  ... 
doi:10.1007/978-3-642-53842-1_37 fatcat:hy6zd27pnzekla34qasb4os2ay

Joint Person Segmentation and Identification in Synchronized First- and Third-person Videos [article]

Mingze Xu, Chenyou Fan, Yuchen Wang, Michael S Ryoo, David J Crandall
2018 arXiv   pre-print
people across different views (i.e., determine who in camera A corresponds with whom in camera B), and (2) given one or more synchronized third-person videos as well as a first-person video taken by a mobile or wearable camera, segment and identify the camera wearer in the third-person videos.  ...  We also evaluate our models on a subset of UTokyo Ego-Surf [40], which contains 8 diverse groups of first-person videos recorded synchronously during face-to-face conversations in both indoor and outdoor  ... 
arXiv:1803.11217v2 fatcat:funa2km33jgdlgepghqmm2juqu

Joint Person Segmentation and Identification in Synchronized First- and Third-Person Videos [chapter]

Mingze Xu, Chenyou Fan, Yuchen Wang, Michael S. Ryoo, David J. Crandall
2018 Lecture Notes in Computer Science  
people across different views (i.e., determine who in camera A corresponds with whom in camera B), and (2) given one or more synchronized third-person videos as well as a first-person video taken by a mobile or wearable camera, segment and identify the camera wearer in the third-person videos.  ...  We also evaluate our models on a subset of UTokyo Ego-Surf [40], which contains 8 diverse groups of first-person videos recorded synchronously during face-to-face conversations in both indoor and outdoor  ... 
doi:10.1007/978-3-030-01246-5_39 fatcat:kkeaw3gbvndwtg3ydoolmx75h4

Recognition of Activities of Daily Living with Egocentric Vision: A Review

Thi-Hoa-Cuc Nguyen, Jean-Christophe Nebel, Francisco Florez-Revuelta
2016 Sensors  
Video-based recognition of activities of daily living (ADLs) is being used in ambient assisted living systems in order to support the independent living of older people.  ...  Interactions, such as handshake, are captured in the first-person social interactions, the Jet Propulsion Laboratory (JPL) First-person Interaction and National University of Singapore (NUS) First-person  ...  Name http://www.vision.huji.ac.il/egoseg/videos/dataset.html Unconstrained: 260 videos including 8 interactions in 2 perspectives (third-person and first-person) to create a total of 16 action classes  ... 
doi:10.3390/s16010072 pmid:26751452 pmcid:PMC4732105 fatcat:okm2fswkjrdzleelae46u3nfna

Precise vehicle ego-localization using feature matching of pavement images

Zijun Jiang, Zhigang Xu, Yunchao Li, Haigen Min, Jingmei Zhou
2020 Journal of Intelligent and Connected Vehicles  
To address the problems above, this paper proposes a precise vehicle ego-localization method based on image matching.  ...  First, fast index matching from the SURF algorithm continues to be used for preliminary screening.  ...  These localization methods can be roughly divided into three categories as follows: (1) passive localization based on video surveillance (Chapuis et al., 2002); (2) ego-localization based on scene  ... 
doi:10.1108/jicv-12-2019-0015 fatcat:mqw2vwt455cmdnqrwirxswsu3i

Gesture Recognition in Ego-centric Videos Using Dense Trajectories and Hand Segmentation

Lorenzo Baraldi, Francesco Paci, Giuseppe Serra, Luca Benini, Rita Cucchiara
2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops  
We present a novel method for monocular hand gesture recognition in ego-vision scenarios that deals with static and dynamic gestures and can achieve high accuracy using only a few positive samples.  ...  Introduction Ego-centric vision is a paradigm that joins humans and wearable devices in the same loop to augment the subject's vision capabilities by automatically processing videos captured with a first-person  ...  However, in first-person camera views hand movement is not consistent with camera motion, and this generates wrong matches between the two frames.  ... 
doi:10.1109/cvprw.2014.107 dblp:conf/cvpr/BaraldiPSBC14 fatcat:cqkao6qfhzc5zee7yqgbbjv4hm

On-line Incremental 3D Human Body Reconstruction for HMI or AR Applications

L. Almeida, F. Vasconcelos, J. P. Barreto, P. Menezes, J. Dias
2011 Field Robotics  
There is a wide variety of research opportunities, including high-performance imaging, multi-view video, virtual view synthesis, etc.  ...  Our approach explores virtual view synthesis through body motion estimation and hybrid sensors composed of video cameras and a depth camera based on structured light or time-of-flight.  ...  Phones and internet chat/audio/video conferencing programs (e.g., VoIP, NetMeeting, Skype) are not able to create the feeling of the remote person's presence.  ... 
doi:10.1142/9789814374286_0041 fatcat:tm6laskjkjfw5pl7hxetk5guha

Accuracy of Trajectories Estimation in a Driver-Assistance Context [chapter]

Waqar Khan, Reinhard Klette
2014 Lecture Notes in Computer Science  
Our comparison of different feature-point matchers gives a general impression of how descriptor performance degrades as a rigid object approaches the ego-vehicle in a collision-scenario video sequence.  ...  To understand the behaviour of the safety system, we used the DoG detector in combination with SURF, BRIEF, and FREAK descriptors, while linBP and iSGM are used as stereo matchers.  ...  The observer is a person holding a flag. The flag is raised after the ego-vehicle, which is on a collision course, crosses the marker on the road.  ... 
doi:10.1007/978-3-642-53926-8_5 fatcat:ldggprp4wjbqrca2i6lcvnw3de

Human Daily Activities Indexing in Videos from Wearable Cameras for Monitoring of Patients with Dementia Diseases

Svebor Karaman, Jenny Benois-Pineau, Remi Megret, Vladislavs Dovgalecs, Jean-Francois Dartigues, Yann Gaestel
2010 20th International Conference on Pattern Recognition  
First results in recognition of activities are promising.  ...  We define a structural model of video recordings based on a Hidden Markov Model. New spatio-temporal features, color features and localization features are proposed as observations.  ...  Translation Parameter Histogram The translational parameters of the affine model (Eq. 1) are good indicators of the strength of a person's ego-motion, which differs depending on the activity.  ... 
doi:10.1109/icpr.2010.999 dblp:conf/icpr/KaramanBMDDG10 fatcat:mjwusky7sra7hkee7u3orqehdy

Investigation On Structural Relationship Among Adolescents' Media Use, Self-Directed Learning, And Ego-Resilience

Youngsun Cho, Jisuk Kim, Miseok Yang
2015 Procedia - Social and Behavioral Sciences  
' and 'personal life' out of media use subordinate factors.  ...  The first year's data of Korea children and youth panel survey from National Youth Policy Institute is used to investigate.  ...  They watch TV, play games, listen to music, read books, and enjoy surfing the internet in their daily lives.  ... 
doi:10.1016/j.sbspro.2015.04.528 fatcat:gquflbkgxneq5nwz75dhisizvi
Showing results 1 — 15 out of 1,292 results