244 Hits in 6.6 sec

Real-Time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor

Franziska Mueller, Dushyant Mehta, Oleksandr Sotnychenko, Srinath Sridhar, Dan Casas, Christian Theobalt
2017 IEEE International Conference on Computer Vision (ICCV)
We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments.  ...  in real time.  ...  Our method can reliably track the hand in 3D even under such conditions using only RGB-D input.  ... 
doi:10.1109/iccv.2017.131 dblp:conf/iccv/MuellerMS0CT17 fatcat:y6wyzugumnffveohczf54ole4u

Real-Time Hand Tracking Under Occlusion from an Egocentric RGB-D Sensor

Franziska Mueller, Dushyant Mehta, Oleksandr Sotnychenko, Srinath Sridhar, Dan Casas, Christian Theobalt
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
We present an approach for real-time, robust and accurate hand pose estimation from moving egocentric RGB-D cameras in cluttered real environments.  ...  in real time.  ...  Our method can reliably track the hand in 3D even under such conditions using only RGB-D input.  ... 
doi:10.1109/iccvw.2017.82 dblp:conf/iccvw/MuellerMS0CT17 fatcat:dqdrwatsqfd5vixaxwzw3fr6lu

3D Hand Pose Detection in Egocentric RGB-D Images [chapter]

Grégory Rogez, Maryam Khademi, J. S. Supančič III, J. M. M. Montiel, Deva Ramanan
2015 Lecture Notes in Computer Science  
Despite the recent advances in full-body pose estimation using Kinect-like sensors, reliable monocular hand pose estimation in RGB-D images is still an unsolved problem.  ...  We focus on the task of hand pose estimation from egocentric viewpoints.  ...  [figure labels: Real-world egocentric RGB-D video (test); Synthetic egocentric RGB-D video (train)] assumptions about visibility/occlusion and manual tracker initialization may not hold in an egocentric setting, making  ...
doi:10.1007/978-3-319-16178-5_25 fatcat:wloilrcfxrbfbeitmjfr74hna4

HOI4D: A 4D Egocentric Dataset for Category-Level Human-Object Interaction [article]

Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, Li Yi
2022 arXiv pre-print
HOI4D consists of 2.4M RGB-D egocentric video frames over 4000 sequences collected by 4 participants interacting with 800 different object instances from 16 categories over 610 different indoor rooms.  ...  With HOI4D, we establish three benchmarking tasks to promote category-level HOI from 4D visual signals, including semantic segmentation of 4D dynamic point cloud sequences and category-level object pose tracking  ...  In order to construct HOI4D, we build a simple head-mounted data-capture suite consisting of a bicycle helmet, a Kinect v2 RGB-D sensor, and an Intel RealSense D455 RGB-D sensor, as shown in Figure 2  ...
arXiv:2203.01577v3 fatcat:kkwisjhrkbgzfp764bt26hd2ra

A Survey on 3D Hand Skeleton and Pose Estimation by Convolutional Neural Network

Van-Hung Le, Hung-Cuong Nguyen
2020 Advances in Science, Technology and Engineering Systems  
In this paper, we surveyed studies in which Convolutional Neural Networks (CNNs) were used to estimate the 3D hand pose from data obtained from cameras (e.g., RGB camera, depth (D) camera, RGB-D camera  ...  of the datasets collected from egocentric vision sensors, and (iv) methods used to collect and annotate datasets from egocentric vision sensors.  ...  The title is "Using the Lie algebra, Lie group to improve the skeleton hand presentation".  ...
doi:10.25046/aj050418 fatcat:tzpjnmpwtjbh7m6ld3nucyvxia

EgoCap

Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
2016 ACM Transactions on Graphics  
Therefore, we propose a new method for real-time, marker-less, and egocentric motion capture: estimating the full-body skeleton pose from a lightweight stereo pair of fisheye cameras attached to a helmet  ...  Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion with an inside-in setup, i.e. without external sensors.  ...  Motion Capture with Depth Sensors: 3D pose estimation is highly accurate and reliable when using multiple RGB-D cameras [Zhang et al. 2014], and even feasible from a single RGB-D camera in real time [  ...
doi:10.1145/2980179.2980235 fatcat:kx3rcoljurb3xgewan4acsb2qa

EgoCap: Egocentric Marker-less Motion Capture with Two Fisheye Cameras [article]

Helge Rhodin, Christian Richardt, Dan Casas, Eldar Insafutdinov, Mohammad Shafiei, Hans-Peter Seidel, Bernt Schiele, Christian Theobalt
2016 arXiv pre-print
We therefore propose a new method for real-time, marker-less and egocentric motion capture which estimates the full-body skeleton pose from a lightweight stereo pair of fisheye cameras that are attached  ...  Alternative suit-based systems use several inertial measurement units or an exoskeleton to capture motion.  ...  Motion Capture with Depth Sensors: 3D pose estimation is highly accurate and reliable when using multiple RGB-D cameras [Zhang et al. 2014], and even feasible from a single RGB-D camera in real time [  ...
arXiv:1609.07306v1 fatcat:xatetlqtpbbsbclzgv3ik2fxee

AR in Hand

Hui Liang, Junsong Yuan, Daniel Thalmann, Nadia Magnenat Thalmann
2015 Proceedings of the 23rd ACM international conference on Multimedia - MM '15  
Technically, we use a head-mounted depth camera to capture the RGB-D images from an egocentric view, and adopt a random forest to regress the palm pose and classify the hand gesture simultaneously via  ...  The predicted pose and gesture are used to render the 3D virtual objects, which are overlaid onto the hand region in the input RGB images with camera calibration parameters for seamless virtual and real scene  ...  The framework of our system is shown in Fig. 1, in which we use a head-mounted SoftKinetic DS325 sensor to capture both the RGB and depth images of the user's hand from an egocentric view and predict the 6-  ...
doi:10.1145/2733373.2807972 dblp:conf/mm/LiangYTM15 fatcat:hbrgvwk3ffcg3nc2n3mmhcv424

First-person pose recognition using egocentric workspaces

Gregory Rogez, James S. Supancic, Deva Ramanan
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
We achieve state-of-the-art hand pose recognition performance from egocentric RGB-D images in real time.  ...  We tackle the problem of estimating the 3D pose of an individual's upper limbs (arms+hands) from a chest-mounted depth camera.  ...  GR was supported by the European Commission under FP7 Marie Curie IOF grant "Egovision4Health" (PIOF-GA-2012-328288).  ...
doi:10.1109/cvpr.2015.7299061 dblp:conf/cvpr/RogezSR15 fatcat:ngd5pvmuj5dadku7um4gzr4dxa

Egocentric recognition of handled objects: Benchmark and analysis

Xiaofeng Ren, Matthai Philipose
2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
We use this dataset and a SIFT-based recognition system to analyze and quantitatively characterize the main challenges in egocentric object recognition, such as motion blur and hand occlusion, along with  ...  The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions.  ...  Let {p_i = (y_i, x_i), D_i} and {p̃_j = (ỹ_j, x̃_j), D̃_j} be the SIFT keys in a clean exemplar image.  ...
doi:10.1109/cvprw.2009.5204360 dblp:conf/cvpr/RenP09 fatcat:7tm42v672jaujmduitcugt5o3e

Egocentric recognition of handled objects: Benchmark and analysis

Xiaofeng Ren, M. Philipose
2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
We use this dataset and a SIFT-based recognition system to analyze and quantitatively characterize the main challenges in egocentric object recognition, such as motion blur and hand occlusion, along with  ...  The egocentric viewpoint from a wearable camera has unique advantages in recognizing handled objects, such as having a close view and seeing objects in their natural positions.  ...  Let {p_i = (y_i, x_i), D_i} and {p̃_j = (ỹ_j, x̃_j), D̃_j} be the SIFT keys in a clean exemplar image.  ...
doi:10.1109/cvpr.2009.5204360 fatcat:jcmhhr4v6bdj3h7xtxbkvf345y

Enhanced Self-Perception in Mixed Reality: Egocentric Arm Segmentation and Database with Automatic Labeling

Ester Gonzalez-Sosa, Pablo Perez, Ruben Tolosana, Redouane Kachach, Alvaro Villegas
2020 IEEE Access  
Results also suggest that, while approaches based on color or depth can work under controlled conditions (lack of occlusion, uniform lighting, only objects of interest in the near range, controlled background  ...  egocentric hand datasets, including GTEA Gaze+, EDSH, EgoHands, Ego Youtube Hands, THU-Read, TEgO, FPAB, and Ego Gesture, which allow for direct comparisons with existing approaches using color or depth  ...  ; on the other hand, RGB-D sensors have a narrow field of view, which also impairs the sense of presence [32].  ...
doi:10.1109/access.2020.3013016 fatcat:3tigmijzqrfi3bku3l5lr63wiu

Hand Pose Estimation: A Survey [article]

Bardia Doosti
2019 arXiv pre-print
In this report, we will first explain the hand pose estimation problem and review major approaches to solving it, especially the two different settings of using depth maps or RGB images.  ...  made Hand Pose Estimation a hot topic in the computer vision field.  ...  [table excerpt listing datasets, e.g. Hands in Action [48] (2014, real, depth, 3rd-person, 3D), MSRA14 [29] (2014, real, depth, 21 joints, 3rd-person, 3D, 6 subjects, 2,400 frames), Dexter1 [42] (2013, real, RGB+D)]  ...
arXiv:1903.01013v2 fatcat:rqkthrt4mjawlk4p2ij5nut64q

GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB [article]

Franziska Mueller, Florian Bernard, Oleksandr Sotnychenko, Dushyant Mehta, Srinath Sridhar, Dan Casas, Christian Theobalt
2017 arXiv pre-print
We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence.  ...  We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.  ...  [46, 47] used 5 RGB cameras and an additional depth sensor to demonstrate real-time hand pose estimation.  ... 
arXiv:1712.01057v1 fatcat:7jxgkuoogfbb5itjoq2wbpftpa

GANerated Hands for Real-Time 3D Hand Tracking from Monocular RGB

Franziska Mueller, Florian Bernard, Oleksandr Sotnychenko, Dushyant Mehta, Srinath Sridhar, Dan Casas, Christian Theobalt
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Figure 1: We present an approach for real-time 3D hand tracking from monocular RGB-only input.  ...  We show real-time 3D hand tracking results using an off-the-shelf RGB webcam in unconstrained setups (center-right, right).  ...  [46, 47] used 5 RGB cameras and an additional depth sensor to demonstrate real-time hand pose estimation.  ...
doi:10.1109/cvpr.2018.00013 dblp:conf/cvpr/MuellerBSM0CT18 fatcat:pw73umrjgjhdzpttehh3uljeu4
Showing results 1–15 out of 244 results