
HandSeg: An Automatically Labeled Dataset for Hand Segmentation from Depth Images [article]

Abhishake Kumar Bojja, Franziska Mueller, Sri Raghu Malireddi, Markus Oberweger, Vincent Lepetit, Christian Theobalt, Kwang Moo Yi, Andrea Tagliasacchi
2018 arXiv   pre-print
We propose an automatic method for generating high-quality annotations for depth-based hand segmentation, and introduce a large-scale hand segmentation dataset.  ...  Existing datasets are typically limited to a single hand.  ...  One could learn a hand segmenter from a dataset of annotated depth images.  ... 
arXiv:1711.05944v4 fatcat:ebixic45pza5nln336xxvwq3fy

3D Convolutional Neural Networks for Dendrite Segmentation Using Fine-Tuning and Hyperparameter Optimization [article]

Jim James, Nathan Pruyne, Tiberiu Stan, Marcus Schwarting, Jiwon Yeom, Seungbum Hong, Peter Voorhees, Ben Blaiszik, Ian Foster
2022 arXiv   pre-print
In this study, we trained 3D convolutional neural networks (CNNs) to segment 3D datasets. Three CNN architectures were investigated, including a new 3D version of FCDense.  ...  The analysis of 3D datasets is particularly challenging due to their large sizes (terabytes) and the presence of artifacts scattered within the imaged volumes.  ...  University), Elizabeth Holm (Carnegie Mellon University), Joshua Pritz (Northwestern University), Marta Garcia Martinez (Argonne National Laboratory), and Aniket Tekawade (Argonne National Laboratory) for  ... 
arXiv:2205.01167v1 fatcat:pfprxt2mlnacbeek3mbgpy6zqy

BusyHands: A Hand-Tool Interaction Database for Assembly Tasks Semantic Segmentation [article]

Roy Shilkrot, Zhi Chai, Minh Hoai
2019 arXiv   pre-print
A total of 7906 samples are included in our first-in-kind dataset, with both RGB and depth images as obtained from a Kinect V2 camera and Blender.  ...  Visual segmentation has seen tremendous advancement recently with ready solutions for a wide variety of scene types, including human hands and other body parts.  ...  Acknowledgments We would like to thank Nvidia for their generous donation of a Titan Xp and Quadro P5000 GPUs, which were used in this project.  ... 
arXiv:1902.07262v1 fatcat:jpoxtpbribg2begm2x3gl4e5ai

An End-to-end Framework for Unconstrained Monocular 3D Hand Pose Estimation [article]

Sanjeev Sharma, Shaoli Huang, Dacheng Tao
2019 arXiv   pre-print
To achieve robustness, the proposed framework uses a novel keypoint-based method to simultaneously predict hand regions and side labels, unlike existing methods that suffer from background color confusion  ...  Most of the existing approaches assume some prior knowledge of the hand (such as hand locations and side information) is available for 3D hand pose estimation.  ...  We determine the largest hand in a dataset by comparing the number of pixels in a segmentation mask for each of the two hands in an image.  ... 
arXiv:1911.12501v1 fatcat:rlvmix53jna4rcfngxlik2rnsa

DexPilot: Vision Based Teleoperation of Dexterous Robotic Hand-Arm System [article]

Ankur Handa, Karl Van Wyk, Wei Yang, Jacky Liang, Yu-Wei Chao, Qian Wan, Stan Birchfield, Nathan Ratliff, Dieter Fox
2019 arXiv   pre-print
Herein, a low-cost, vision-based teleoperation system, DexPilot, was developed that allows for complete control over the full 23 DoA robotic system by merely observing the bare human hand.  ...  This allows for collection of high dimensional, multi-modality, state-action data that can be leveraged in the future to learn sensorimotor policies for challenging manipulation tasks.  ...  The optimized mapping was used to label human depth images to learn end-to-end a deep network that can ingest a depth image and output joint angles for the Shadow hand.  ... 
arXiv:1910.03135v2 fatcat:qhhkpmouhbhljma5zfysdjcyte

Two-hand Global 3D Pose Estimation Using Monocular RGB [article]

Fanqing Lin, Connor Wilhelm, Tony Martinez
2020 arXiv   pre-print
To train the CNNs for this new task, we introduce a large-scale synthetic 3D hand pose dataset.  ...  We tackle the challenging task of estimating global 3D joint locations for both hands via only monocular RGB input images.  ...  Given a single RGB image as input, we use HandSeg-Net to simultaneously obtain the segmentation masks and the heatmap energy of both hands.  ... 
arXiv:2006.01320v4 fatcat:axg2q7zwqjdw5ku47km3kyu4hm

Learning to Estimate 3D Hand Pose from Single RGB Images [article]

Christian Zimmermann, Thomas Brox
2017 arXiv   pre-print
We introduce a large-scale 3D hand pose dataset based on synthetic hand models for training the involved networks.  ...  Low-cost consumer depth cameras and deep learning have enabled reasonable 3D hand pose estimation from single depth images.  ...  Also we thank Nikolaus Mayer, Benjamin Ummenhofer and Maxim Tatarchenko for valuable ideas and many fruitful discussions.  ... 
arXiv:1705.01389v3 fatcat:ewhfszchprgu7ebnfs6tsjdk2e

Gesture Recognition: Focus on the Hands

Pradyumna Narayana, J. Ross Beveridge, Bruce A. Draper
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Using this technique, we improve performance on the ChaLearn IsoGD dataset from a previous best of 67.71% to 82.07%, and on the NVIDIA dataset from 83.8% to 91.28%.  ...  Gestures are a common form of human communication and important for human computer interfaces (HCI).  ...  For the NVIDIA depth data, we use the heuristic that the right hand is the closest object to the sensor, while for NVIDIA RGB images we use the HandSegNet of Zimmermann and Brox [35].  ... 
doi:10.1109/cvpr.2018.00549 dblp:conf/cvpr/NarayanaBD18 fatcat:6cjb7bsznncuvbndwswnqcvsaq

A Comprehensive Survey of Video Datasets for Background Subtraction

Rudrika Kalsotra, Sakshi Arora
2019 IEEE Access  
This paper presents a comprehensive account of public video datasets for background subtraction, addressing the lack of a detailed description of each dataset.  ...  Finding the appropriate dataset is generally a cumbersome task for an exhaustive evaluation of algorithms.  ...  In addition to RGB and Depth images, manually segmented foreground masks are provided for some video frames as ground-truth information.  ... 
doi:10.1109/access.2019.2914961 fatcat:thr65j4uivehpgxtkwbsqi3yc4

Multi-Stroke Thai Finger-Spelling Sign Language Recognition System with Deep Learning

Thongpan Pariwat, Pusadee Seresangtakul
2021 Symmetry  
This research uses a vision-based technique on a complex background with semantic segmentation performed with dilated convolution for hand segmentation, hand strokes separated using optical flow, and learning  ...  Sign language is a type of language for the hearing impaired that the general public commonly does not understand.  ...  The authors would like to extend our thanks to Sotsuksa Khon Kaen School for their assistance and support in the development of the Thai fingerspelling sign language data.  ... 
doi:10.3390/sym13020262 fatcat:ibxq43lz2fd6dce5lmzjumiopm

Contextual Attention for Hand Detection in the Wild

Supreeth Narasimhaswamy, Zhengwei Wei, Yang Wang, Justin Zhang, Minh Hoai Nguyen
2019 IEEE/CVF International Conference on Computer Vision (ICCV)
We present Hand-CNN, a novel convolutional network architecture for detecting hand masks and predicting hand orientations in unconstrained images.  ...  We also introduce large-scale annotated hand datasets containing hands in unconstrained images for training and evaluation.  ...  Many thanks to Tomas Simon for his suggestion about the COCO dataset and Rakshit Gautam for his contribution to the data annotation process.  ... 
doi:10.1109/iccv.2019.00966 dblp:conf/iccv/NarasimhaswamyW19 fatcat:km25up7aifd37im3qzlgvgpqma

Segmentation of arteries in MPRAGE images of the ventral medial prefrontal cortex

N. Penumetcha, B. Jedynak, M. Hosakere, E. Ceyhan, K.N. Botteron, J.T. Ratnanather
2008 Computerized Medical Imaging and Graphics  
The Fast Marching method is used to generate a curve within the artery. Then, the largest connected component is selected to segment the artery, which is used to mask the image.  ...  A method for removing arteries that appear bright with intensities similar to white matter in Magnetization-Prepared Rapid Gradient Echo images of the ventral medial prefrontal cortex is described.  ...  We thank Suraj Kabadi for technical assistance.  ... 
doi:10.1016/j.compmedimag.2007.08.013 pmid:17964757 pmcid:PMC2873191 fatcat:acrwioz2nvgu3gcffkexsip4jq

Parsing Occluded People

Golnaz Ghiasi, Yi Yang, Deva Ramanan, Charless C. Fowlkes
2014 IEEE Conference on Computer Vision and Pattern Recognition
Occlusion poses a significant difficulty for object recognition due to the combinatorial diversity of possible occlusion patterns.  ...  The underlying part mixture-structure also allows the model to capture coherence of object support masks between neighboring parts and make compelling predictions of figure-ground-occluder segmentations  ...  We use negative training images from the INRIAPerson database [4] and evaluate models using 190 test images from H3D.  ... 
doi:10.1109/cvpr.2014.308 dblp:conf/cvpr/GhiasiYRF14 fatcat:3xreaqfrmbeafbnl6ni5yy4m3e

Image classification using compression distance [article]

Yuxuan Lan, Richard Harvey
2005 International Conference on Vision, Video and Graphics  
We show that this distance can be used for classification on real images. Furthermore, the same compressor can also operate on derived features with no further modification.  ...  The new classifier operating on these trees produces results that are very similar to those obtained on the raw images, thus allowing, for the first time, classification using the full trees.  ...  The foreground of a subset of images (204 in total) taken from the complex images was hand-labelled to produce the ground truth.  ... 
doi:10.2312/vvg.20051023 dblp:conf/vvg/LanH05 fatcat:uvmvz2ckwzejlevglltp6hnk34

GuideME: Slice-guided Semiautomatic Multivariate Exploration of Volumes

L. Zhou, C. Hansen
2014 Computer graphics forum (Print)  
Shown in this figure is an example of extracting the tumor core in multimodal MR brain scan data.  ...  In this paper, we propose GuideME: a novel slice-guided semiautomatic multivariate volume exploration approach.  ...  Boundary Confidence Image A boundary confidence image can be derived from the extracted boundary images of user-chosen attributes Ās from the pop-up menu in the inspection window to indicate the uncertainty  ... 
doi:10.1111/cgf.12371 fatcat:rx3ikbdzszabvn6vlcl3ouu2he