11 Hits in 2.4 sec

HOnnotate: A method for 3D Annotation of Hand and Object Poses [article]

Shreyas Hampali, Mahdi Rad, Markus Oberweger, Vincent Lepetit
2020 arXiv   pre-print
We propose a method for annotating images of a hand manipulating an object with the 3D poses of both the hand and the object, together with a dataset created using this method.  ...  With this method, we created HO-3D, the first markerless dataset of color images with 3D annotations for both the hand and object.  ...  a method for labelling real images of hand+object interaction with the 3D poses of the hand and of the object.  ... 
arXiv:1907.01481v6 fatcat:cgqanwhwp5ch3jqvnnyhqwru7y

HOnnotate: A Method for 3D Annotation of Hand and Object Poses

Shreyas Hampali, Mahdi Rad, Markus Oberweger, Vincent Lepetit
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We propose a method for annotating images of a hand manipulating an object with the 3D poses of both the hand and the object, together with a dataset created using this method.  ...  With this method, we created HO-3D, the first markerless dataset of color images with 3D annotations for both the hand and object.  ...  introduce a method for labelling real images of hand+object interaction with the 3D poses of the hand and of the object.  ... 
doi:10.1109/cvpr42600.2020.00326 dblp:conf/cvpr/HampaliROL20 fatcat:3vqj2f5lvfbyjgcncygjepsbzu

HO-3D_v3: Improving the Accuracy of Hand-Object Annotations of the HO-3D Dataset [article]

Shreyas Hampali, Sayan Deb Sarkar, Vincent Lepetit
2021 arXiv   pre-print
HO-3D is a dataset providing image sequences of various hand-object interaction scenarios annotated with the 3D pose of the hand and the object and was originally introduced as HO-3D_v2.  ...  HO-3D_v3 provides more accurate annotations for both the hand and object poses thus resulting in better estimates of contact regions between the hand and the object.  ...  HO-3D is a dataset providing image sequences of various hand-object interaction scenarios annotated with the 3D pose of the hand and the object and was introduced in [1] as version HO-3D v2.  ... 
arXiv:2107.00887v1 fatcat:yyc26ptllbe3xp36gawu4gdvya

TOCH: Spatio-Temporal Object-to-Hand Correspondence for Motion Refinement [article]

Keyang Zhou, Bharat Lal Bhatnagar, Jan Eric Lenssen, Gerard Pons-Moll
2022 arXiv   pre-print
We present TOCH, a method for refining incorrect 3D hand-object interaction sequences using a data prior.  ...  The core of our method is TOCH fields, a novel spatio-temporal representation for modeling correspondences between hands and objects during interaction.  ...  Gerard Pons-Moll is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 -Project number 390727645.  ... 
arXiv:2205.07982v2 fatcat:qwte6qcbqfgqxpoknensy7bmuq

H2O: Two Hands Manipulating Objects for First Person Interaction Recognition [article]

Taein Kwon, Bugra Tekin, Jan Stühmer, Federica Bogo, Marc Pollefeys
2021 arXiv   pre-print
Our method produces annotations of the 3D pose of two hands and the 6D pose of the manipulated objects, along with their interaction labels for each frame.  ...  We present a comprehensive framework for egocentric interaction recognition using markerless 3D annotations of two hands manipulating objects.  ...  , for the first time, the 3D pose of two interacting hands and the 6D pose of manipulated objects, along with action and object classes. • Leveraging our dataset, we propose a novel method for 3D interaction  ... 
arXiv:2104.11181v2 fatcat:6hjwuoctcfeytofsot5wsicpe4

HOI4D: A 4D Egocentric Dataset for Category-Level Human-Object Interaction [article]

Yunze Liu, Yun Liu, Che Jiang, Kangbo Lyu, Weikang Wan, Hao Shen, Boqiang Liang, Zhoujie Fu, He Wang, Li Yi
2022 arXiv   pre-print
Frame-wise annotations for panoptic segmentation, motion segmentation, 3D hand pose, category-level object pose and hand action have also been provided, together with reconstructed object meshes and scene  ...  We present HOI4D, a large-scale 4D egocentric dataset with rich annotations, to catalyze the research of category-level human-object interaction.  ...  A lot of these datasets focus on recognizing daily activities [4, 12, 17, 29, 37] and provide mostly 2D features omitting 3D annotations such as 3D hand poses and object poses, which are crucial for  ... 
arXiv:2203.01577v3 fatcat:kkwisjhrkbgzfp764bt26hd2ra

Recent Advances in 3D Object and Hand Pose Estimation [article]

Vincent Lepetit
2020 arXiv   pre-print
In this chapter, we present the recent developments for 3D object and hand pose estimation using cameras, and discuss their abilities and limitations and the possible future development of the field.  ...  3D object and hand pose estimation have huge potentials for Augmented Reality, to enable tangible interfaces, natural interfaces, and blurring the boundaries between the real and virtual worlds.  ...  HOnnotate [Hampali et al., 2020] proposed a method to automatically annotate real images of hands grasping objects with their 3D poses (see Figure 6 (f)), which works with a single  ... 
arXiv:2006.05927v1 fatcat:ttroutu7ljgzvf4joqexpz6rai

Unsupervised Domain Adaptation with Temporal-Consistent Self-Training for 3D Hand-Object Joint Reconstruction [article]

Mengshi Qi, Edoardo Remelli, Mathieu Salzmann, Pascal Fua
2020 arXiv   pre-print
Deep learning solutions for hand-object 3D pose and shape estimation are now very effective when an annotated dataset is available to train them to handle the scenarios and lighting conditions they will  ...  We will demonstrate that our approach outperforms state-of-the-art 3D hand-object joint reconstruction methods on three widely-used benchmarks and will make our code publicly available.  ...  "Method for 3D Annotation of Hand and Object Poses," in Conference on Computer Vision and Pattern Recognition  ... 
arXiv:2012.11260v1 fatcat:wzloca4avzbtfcumf26bek6dc4

Local and Global Point Cloud Reconstruction for 3D Hand Pose Estimation [article]

Ziwei Yu, Linlin Yang, Shicheng Chen, Angela Yao
2021 arXiv   pre-print
This paper addresses the 3D point cloud reconstruction and 3D pose estimation of the human hand from a single RGB image.  ...  To that end, we present a novel pipeline for local and global point cloud reconstruction using a 3D hand template while learning a latent representation for pose estimation.  ...  Honnotate: A method for 3d annotation of hand and object poses.  ... 
arXiv:2112.06389v1 fatcat:n6zujacknrfqllnld4vaulia5m

A survey of image labelling for computer vision applications

Christoph Sager, Christian Janiesch, Patrick Zschech
2021 Journal of Business Analytics  
We perform a structured literature review to compile the underlying concepts and features of image labelling software such as annotation expressiveness and degree of automation.  ...  Supervised machine learning methods for image analysis require large amounts of labelled training data to solve computer vision problems.  ...  ., 2019 ) that supports the semi-automated labelling of hand poses and shapes, and HOnnotate (Hampali et al., 2020) , which makes use of depth images.  ... 
doi:10.1080/2573234x.2021.1908861 fatcat:6c5ro47ibbf5ndmaqzbwciyxdm

Table of Contents

2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Tan (National University of Singapore; Yale-NUS College), and Loong-Fah Cheong (National University of Singapore) HOnnotate: A Method for 3D Annotation of Hand and Object Poses 3193 Shreyas Hampali (Institute  ...  ., CAS, Beijing, China; Peng Cheng Laboratory, Shenzhen, China) HandVoxNet: Deep Voxel-Based Network for 3D Hand Shape and Pose Estimation From a Single Depth Map 7111 Jameel Malik (TU Kaiserslautern  ... 
doi:10.1109/cvpr42600.2020.00004 fatcat:c7els2kee5cq7lh6cemeqhdcoa