
Special Issue on Assistive Computer Vision and Robotics - "Assistive Solutions for Mobility, Communication and HMI"

Giovanni Maria Farinella, Takeo Kanade, Marco Leo, Gerard G. Medioni, Mohan Trivedi
2016 Computer Vision and Image Understanding  
robotic manipulator, egocentric vision.  ...  extracting video-based guidance on object usage, from egocentric video and wearable gaze tracking, collected from multiple users while performing tasks.  ... 
doi:10.1016/j.cviu.2016.05.014 fatcat:ek3uupvcvvfqdh6u6chla56w3q

Special issue on Assistive Computer Vision and Robotics - Part I

Giovanni Maria Farinella, Takeo Kanade, Marco Leo, Gerard G. Medioni, Mohan Trivedi
2016 Computer Vision and Image Understanding  
robotic manipulator, egocentric vision.  ...  of individuals and to improve their quality of life.  ... 
doi:10.1016/j.cviu.2016.05.010 fatcat:tiiosvi5lbagnecmyekiz2p57m

A Sequential Classifier for Hand Detection in the Framework of Egocentric Vision

Alejandro Betancourt
2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
Hand detection is one of the most explored areas in Egocentric Vision Video Analysis for wearable devices.  ...  this fact could affect the whole performance of the device since hand measurements are usually the starting point for higher level inference, or could lead to inefficient use of computational resources and  ...  One of the most explored areas in egocentric vision video analysis is related to the detection and tracking of the user's hands [13, 17].  ... 
doi:10.1109/cvprw.2014.92 dblp:conf/cvpr/Betancourt14 fatcat:drxt6iamrzg2vdr3gg7zyqggva
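
The snippet above frames hand detection as the cheap first stage that gates more expensive egocentric inference. Purely as an illustration of that "sequential classifier" idea, and not the paper's actual model, a frame-level hand-presence gate in front of a costlier segmenter might look like the sketch below; the histogram feature, the random forest, and the function names are all assumptions.

```python
# Illustrative sketch (not the paper's model): a cheap per-frame
# hand-presence classifier that gates a more expensive hand segmenter.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def color_histogram(frame_rgb, bins=8):
    """Coarse RGB histogram as a very cheap frame descriptor (assumed feature)."""
    hist, _ = np.histogramdd(
        frame_rgb.reshape(-1, 3), bins=(bins, bins, bins),
        range=((0, 256),) * 3)
    return hist.ravel() / hist.sum()

# Train the gate on labelled frames (features X, labels y: 1 = hands visible).
X = np.random.rand(200, 8 ** 3)          # placeholder descriptors
y = np.random.randint(0, 2, size=200)    # placeholder labels
gate = RandomForestClassifier(n_estimators=50).fit(X, y)

def process_frame(frame_rgb, segmenter):
    """Run the expensive segmenter only when the gate predicts hands."""
    feat = color_histogram(frame_rgb)[None, :]
    if gate.predict(feat)[0] == 1:
        return segmenter(frame_rgb)       # expensive stage
    return None                           # skip: no hands expected
```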

KrishnaCam: Using a longitudinal, single-person, egocentric dataset for scene understanding tasks

Krishna Kumar Singh, Kayvon Fatahalian, Alexei A. Efros
2016 IEEE Winter Conference on Applications of Computer Vision (WACV)  
We record, and analyze, and present to the community, KrishnaCam, a large (7.6 million frames, 70 hours) egocentric video stream along with GPS position, acceleration and body orientation data spanning nine months of the life of a computer vision graduate student.  ...  Acknowledgments: Support for this research was provided by the National Science Foundation (IIS-1422767), the Intel Corporation's Egocentric Video ISRA, and by Google.  ... 
doi:10.1109/wacv.2016.7477717 dblp:conf/wacv/SinghFE16 fatcat:aua4mbe5cjfcrclace6qy4xi44

Walking perception by walking observers

A. Jacobs, M. Shiffrar
2004 Journal of Vision  
motor effort, and potential for action coordination.  ...  These results suggest that the visual analysis of human motion during traditional laboratory studies can differ substantially from the visual analysis of human movement under more realistic conditions.  ...  This finding is consistent with research in other domains of vision science demonstrating differences between egocentric and exocentric perception (e.g., Loomis et al., 1996).  ... 
doi:10.1167/4.8.218 fatcat:xrbawypozbftdclzyzj2lcxiae

Laplacian Vision

Yuta Itoh, Jason Orlosky, Kiyoshi Kiyokawa, Gudrun Klinker
2016 Proceedings of the 7th Augmented Human International Conference (AH '16)  
Figure 1: (a) Our Laplacian Vision system with an Optical See-Through Head-Mounted Display (OST-HMD) and user-view camera placed behind the display screen.  ...  People still often miss a tennis shot, which might cause them to lose the match, or fail to avoid a car or pedestrian, which can lead to injury or even death.  ...  We hope that this work will serve as a cornerstone for vision augmentation research with real-time physics prediction, and will inspire others to create new predictive visualizations.  ... 
doi:10.1145/2875194.2875227 dblp:conf/aughuman/ItohOKK16 fatcat:lklgx2edf5hltcjbqp2owwzeau
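
The abstract is about overlaying predicted object trajectories in an OST-HMD. A minimal sketch of the underlying real-time physics prediction, assuming constant acceleration and a least-squares velocity estimate (the actual system involves tracking, filtering, and display calibration well beyond this), could be:

```python
# Minimal sketch: extrapolate a tracked object's 3D path under constant
# acceleration (e.g. gravity) so a predicted trajectory can be drawn in an HMD.
# This is an illustration of the idea, not the system described in the paper.
import numpy as np

GRAVITY = np.array([0.0, -9.81, 0.0])  # m/s^2, assumed world frame

def predict_trajectory(p0, v0, horizon=1.0, dt=1.0 / 60.0, accel=GRAVITY):
    """Return future 3D positions over `horizon` seconds at display rate."""
    t = np.arange(dt, horizon + dt, dt)[:, None]
    return p0 + v0 * t + 0.5 * accel * t ** 2

def estimate_velocity(positions, timestamps):
    """Least-squares velocity estimate from recent tracked positions."""
    t = np.asarray(timestamps) - timestamps[0]
    A = np.stack([np.ones_like(t), t], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.asarray(positions), rcond=None)
    return coef[1]  # slope = velocity (m/s)

# Usage: given the last few tracked positions, compute the arc to overlay.
obs_t = [0.00, 0.03, 0.06, 0.09]
obs_p = np.array([[0.0, 1.50, 2.0], [0.1, 1.53, 1.9],
                  [0.2, 1.55, 1.8], [0.3, 1.56, 1.7]])
v0 = estimate_velocity(obs_p, obs_t)
path = predict_trajectory(obs_p[-1], v0)   # points to draw in the HMD view
```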

Toward Driving Scene Understanding: A Dataset for Learning Driver Behavior and Causal Reasoning

Vasili Ramanishka, Yi-Ting Chen, Teruhisa Misu, Kate Saenko
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
We provide a detailed analysis of HDD with a comparison to other driving datasets.  ...  To achieve systems that can operate in a complex physical and social environment, they need to understand and learn how humans drive and interact with traffic scenes.  ...  The sensor data are synchronized and timestamped using ROS and customized hardware and software designed for multimodal data analysis.  ... 
doi:10.1109/cvpr.2018.00803 dblp:conf/cvpr/RamanishkaCMS18 fatcat:fry4tklws5agho6kk2gl5ynhym
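
The snippet notes that the sensor streams are synchronized and timestamped. The sketch below shows the generic idea of nearest-timestamp alignment between a camera stream and a faster sensor stream; it is an illustration of timestamp-based synchronization in general, not the dataset's actual ROS-based pipeline, and the sampling rates are made up.

```python
# Illustration only: align two timestamped streams by nearest timestamp.
import numpy as np

def align_nearest(ref_ts, other_ts):
    """For each reference timestamp, return the index of the closest
    timestamp in the other stream (both arrays sorted, in seconds)."""
    idx = np.searchsorted(other_ts, ref_ts)
    idx = np.clip(idx, 1, len(other_ts) - 1)
    left, right = other_ts[idx - 1], other_ts[idx]
    use_left = (ref_ts - left) < (right - ref_ts)
    return np.where(use_left, idx - 1, idx)

camera_ts = np.arange(0.0, 1.0, 1 / 30)            # 30 Hz camera frames
can_bus_ts = np.arange(0.0, 1.0, 1 / 100) + 0.002  # 100 Hz CAN readings
pairs = align_nearest(camera_ts, can_bus_ts)       # CAN index per frame
```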

Allocentric coding: Spatial range and combination rules

D. Camors, C. Jouffrais, B.R. Cottereau, J.B. Durand
2015 Vision Research  
of allocentric and egocentric cues is governed by a coupling prior.  ...  In the present study, we investigated (1) how allocentric coding depends on the distance between the targets and their surrounding landmarks (i.e. the spatial range) and (2) how allocentric and egocentric  ...  Acknowledgments: The authors would like to thank Florence Agbazahou for her help during the data collection.  ... 
doi:10.1016/j.visres.2015.02.018 pmid:25749676 fatcat:fmw6sc6wnzeijb4hpxaqux5joi
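
The abstract refers to combining allocentric and egocentric cues under a coupling prior. The standard baseline such models build on is precision-weighted (reliability-weighted) averaging of the two estimates; the sketch below shows only that baseline, not the paper's coupling-prior model.

```python
# Reliability-weighted combination of an egocentric and an allocentric
# position estimate -- a standard cue-combination baseline, shown only to
# illustrate the kind of model the abstract refers to.
import numpy as np

def combine_cues(x_ego, var_ego, x_allo, var_allo):
    """Precision-weighted average of two independent Gaussian estimates."""
    w_ego = (1.0 / var_ego) / (1.0 / var_ego + 1.0 / var_allo)
    x_hat = w_ego * x_ego + (1.0 - w_ego) * x_allo
    var_hat = 1.0 / (1.0 / var_ego + 1.0 / var_allo)
    return x_hat, var_hat

# Example: the allocentric (landmark-relative) cue is less reliable when the
# landmark is far from the target, so it receives a smaller weight.
print(combine_cues(x_ego=2.0, var_ego=0.5, x_allo=2.6, var_allo=2.0))
```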

Visual-GPS: Ego-Downward and Ambient Video Based Person Location Association

Liang Yang, Hao Jiang, Zhouyuan Huo, Jizhong Xiao
2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)  
We propose a novel method (Visual-GPS) to identify, track, and localize the person, who is capturing the egocentric video, using joint analysis of imagery from both videos.  ...  We can track and localize the person by finding the most "correlated" individual in the third view.  ...  Wei Li for help with polishing the paper, and all the anonymous participants involved in data collection.  ... 
doi:10.1109/cvprw.2019.00050 dblp:conf/cvpr/YangJHX19 fatcat:o27jlrtrwzaubmrrcat4xes3ca
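
The abstract localizes the camera wearer by finding the most "correlated" individual in the third-person view. A toy version of that matching step, assuming a 1-D motion signal per candidate and simple normalized correlation (the paper's joint analysis of both videos is considerably richer), is:

```python
# Toy illustration of the "most correlated individual" idea: compare a motion
# signal derived from the egocentric video with the motion signal of each
# person tracked in the ambient (third-person) video.
import numpy as np

def normalized_correlation(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def find_wearer(ego_motion, track_motions):
    """Return the index of the track whose motion best matches the ego signal."""
    scores = [normalized_correlation(ego_motion, m) for m in track_motions]
    return int(np.argmax(scores)), scores

# ego_motion: e.g. per-frame camera-shake magnitude from the egocentric video.
# track_motions: per-frame speed of each detected person in the ambient video.
ego_motion = np.random.rand(300)
track_motions = [np.random.rand(300) for _ in range(4)]
wearer_idx, scores = find_wearer(ego_motion, track_motions)
```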

Temporal Perception and Prediction in Ego-Centric Video

Yipin Zhou, Tamara L. Berg
2015 IEEE International Conference on Computer Vision (ICCV)  
Experiments indicate that humans and computers can do well on temporal prediction and that personalization to a particular individual or environment provides significantly increased performance.  ...  In this paper we explore two simple tasks related to temporal prediction in egocentric videos of everyday activities.  ...  Acknowledgements: We thank David Forsyth for ideas and discussions related to the prediction problem and Vicente Ordonez for useful discussions.  ... 
doi:10.1109/iccv.2015.511 dblp:conf/iccv/ZhouB15 fatcat:bcndbvwuqzhzzczhybitbph6ee

Social interactions: A first-person perspective

A. Fathi, J. K. Hodgins, J. M. Rehg
2012 IEEE Conference on Computer Vision and Pattern Recognition  
Further, individuals are assigned roles based on their patterns of attention. The roles and locations of individuals are analyzed over time to detect and recognize the types of social interactions.  ...  The location and orientation of faces are estimated and used to compute the line of sight for each face.  ...  Acknowledgment: Portions of this work were supported in part by ARO MURI award number W911NF-11-1-0046, National Science Foundation award IIS-1029679, and a gift from the Intel Corporation.  ... 
doi:10.1109/cvpr.2012.6247805 dblp:conf/cvpr/FathiHR12 fatcat:mvijo7nawncv3cusfhogtfuc3e
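
The snippet describes estimating each face's line of sight and inferring roles from attention patterns. A heavily simplified "who is looking at whom" step, assuming 2-D positions and unit gaze directions and an arbitrary field-of-view threshold, might be:

```python
# Simplified "who is looking at whom": given 2D positions and gaze directions
# for each person, pick the person closest to each gaze ray. Illustrative only.
import numpy as np

def attention_targets(positions, gaze_dirs, fov_cos=0.9):
    """positions: (N, 2); gaze_dirs: (N, 2) unit vectors.
    Returns, for each person, the index they appear to look at (or -1)."""
    n = len(positions)
    targets = np.full(n, -1)
    for i in range(n):
        best, best_cos = -1, fov_cos
        for j in range(n):
            if i == j:
                continue
            to_j = positions[j] - positions[i]
            to_j = to_j / (np.linalg.norm(to_j) + 1e-8)
            c = float(np.dot(gaze_dirs[i], to_j))
            if c > best_cos:   # inside the field of view, closest to the gaze ray
                best, best_cos = j, c
        targets[i] = best
    return targets

positions = np.array([[0, 0], [1, 0], [0.5, 1.0]], dtype=float)
gaze_dirs = np.array([[1, 0], [-1, 0], [0, -1]], dtype=float)
print(attention_targets(positions, gaze_dirs))  # [1, 0, -1] with these toy values
```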

ST-P3: End-to-end Vision-based Autonomous Driving via Spatial-Temporal Feature Learning [article]

Shengchao Hu and Li Chen and Penghao Wu and Hongyang Li and Junchi Yan and Dacheng Tao
2022 arXiv   pre-print
Specifically, an egocentric-aligned accumulation technique is proposed to preserve geometry information in 3D space before the bird's eye view transformation for perception; a dual pathway modeling is  ...  Source code, model and protocol details are made publicly available at https://github.com/OpenPerceptionX/ST-P3.  ...  This work is also supported in part by National Key Research and Development Program of China (2020AAA0107600), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and NSFC (61972250  ... 
arXiv:2207.07601v2 fatcat:gyj67ir7x5avfk3i2fahhb2hpu
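
The abstract's egocentric-aligned accumulation warps past features into the current ego frame so that geometry is preserved before the bird's-eye-view transformation. The sketch below is only a cartoon of that alignment, using a nearest-neighbour 2-D rigid warp of a BEV grid with an assumed pose convention; it is not the ST-P3 implementation.

```python
# Cartoon of "egocentric-aligned accumulation": warp a past bird's-eye-view
# (BEV) feature map into the current ego frame using the relative 2D pose,
# then average it with the current map. Illustrative only, not the ST-P3 code.
import numpy as np

def warp_bev(feat, dx, dy, dyaw, meters_per_cell=0.5):
    """feat: (H, W, C) BEV features centred on the past ego pose.
    dx, dy (m) and dyaw (rad): current pose expressed in the past ego frame
    (an assumed convention)."""
    H, W, C = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # current-frame cell centres in metres, origin at the map centre
    cx = (xs - W / 2.0) * meters_per_cell
    cy = (ys - H / 2.0) * meters_per_cell
    # where does each current cell live in the past map?  p_past = R(dyaw) p + t
    cos_t, sin_t = np.cos(dyaw), np.sin(dyaw)
    px = cos_t * cx - sin_t * cy + dx
    py = sin_t * cx + cos_t * cy + dy
    src_x = np.round(px / meters_per_cell + W / 2.0).astype(int)
    src_y = np.round(py / meters_per_cell + H / 2.0).astype(int)
    valid = (src_x >= 0) & (src_x < W) & (src_y >= 0) & (src_y < H)
    out = np.zeros_like(feat)
    out[ys[valid], xs[valid]] = feat[src_y[valid], src_x[valid]]
    return out

past, current = np.random.rand(2, 64, 64, 8)
aligned = warp_bev(past, dx=1.2, dy=0.0, dyaw=0.05)
fused = 0.5 * (aligned + current)   # naive accumulation of aligned features
```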

Deep Learning for Vision-based Prediction: A Survey [article]

Amir Rasouli
2020 arXiv   pre-print
In addition, we discuss the common evaluation metrics and datasets used for vision-based prediction tasks.  ...  A database of all the information presented in this survey, cross-referenced according to papers, datasets and metrics, can be found online at https://github.com/aras62/vision-based-prediction  ...  vehicle dynamics are fed into individual LSTM units.  ... 
arXiv:2007.00095v2 fatcat:ushhfnblqjatdluhseqtgrfhwu
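
One fragment of the snippet mentions feeding vehicle dynamics into individual LSTM units. A generic sketch of that pattern in PyTorch, where each agent's state history is encoded as its own sequence (the specific architectures covered by the survey differ in detail), is:

```python
# Generic sketch of per-agent dynamics encoding with an LSTM (PyTorch).
# This illustrates the pattern named in the snippet, not any particular
# architecture from the survey; state_dim and hidden_dim are assumptions.
import torch
import torch.nn as nn

class AgentDynamicsEncoder(nn.Module):
    def __init__(self, state_dim=4, hidden_dim=64):
        super().__init__()
        # state_dim: e.g. (x, y, speed, heading) per timestep
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)

    def forward(self, states):
        # states: (num_agents, timesteps, state_dim); each agent's history is
        # processed as an independent sequence
        _, (h_n, _) = self.lstm(states)
        return h_n[-1]            # (num_agents, hidden_dim) summary per agent

encoder = AgentDynamicsEncoder()
history = torch.randn(6, 20, 4)   # 6 agents, 20 past steps
embeddings = encoder(history)     # one embedding per agent
```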

Egocentric Field-of-View Localization Using First-Person Point-of-View Devices

Vinay Bettadapura, Irfan Essa, Caroline Pantofaru
2015 IEEE Winter Conference on Applications of Computer Vision (WACV)  
We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions  ...  We define egocentric FOV localization as capturing the visual information from a person's field-of-view in a given environment and transferring this information onto a reference corpus of images and videos  ...  Note the change in season and pedestrian traffic between the POV images and the reference image. Figure 3. Egocentric FOV localization in indoor environments.  ... 
doi:10.1109/wacv.2015.89 dblp:conf/wacv/BettadapuraEP15 fatcat:ti53grmpqrethlljckdjq5gapa
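
Egocentric FOV localization is defined in the snippet as transferring the wearer's field of view onto a reference corpus of images and videos. At its core this involves a retrieval step; the sketch below shows a bare-bones cosine-similarity retrieval over assumed global descriptors, without the geometric verification a real system would need.

```python
# Bare-bones retrieval sketch for FOV localization: match a descriptor of the
# point-of-view frame against descriptors of a reference image corpus.
# Illustrative only; the paper's pipeline is more sophisticated.
import numpy as np

def retrieve(query_desc, corpus_descs, top_k=5):
    """Return indices of the reference images most similar to the query."""
    q = query_desc / (np.linalg.norm(query_desc) + 1e-8)
    C = corpus_descs / (np.linalg.norm(corpus_descs, axis=1, keepdims=True) + 1e-8)
    sims = C @ q
    return np.argsort(-sims)[:top_k], sims

corpus_descs = np.random.rand(1000, 512)  # one global descriptor per reference image
query_desc = np.random.rand(512)          # descriptor of the current POV frame
top_idx, sims = retrieve(query_desc, corpus_descs)
```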

Depth-Based Hand Pose Estimation: Methods, Data, and Challenges

James Steven Supančič, Grégory Rogez, Yi Yang, Jamie Shotton, Deva Ramanan
2018 International Journal of Computer Vision  
We provide an extensive analysis of the state-of-the-art, focusing on hand pose estimation from a single depth frame.  ...  To do so, we have implemented a considerable number of systems, and have released software and evaluation code.  ...  and Perona, P. (2012). Pedestrian detection: An evaluation of the state of the art. IEEE Transactions on Pattern Analysis and Machine Intelligence.  ... 
doi:10.1007/s11263-018-1081-7 fatcat:4ypnulddljdcxelywltdkwne6e
Showing results 1 — 15 out of 701 results