
Hand-Held Object with Action Recognition Based On Convolutional Neural Network in Spatio Temporal Domain

2019 International Journal of Engineering and Advanced Technology  
Hand-held Object Recognition (HHOR) assigns a label to the object which is held in hand; this could help machines in understanding the environment and the intentions of people.  ...  In HCI, hand-held object recognition has a main role. This approach helps the computer to realise the user's intentions and also meets the user's requirements.  ...  Hand-held object recognition (HHOR) based studies are categorised into two types, i.e. the interface of first-person and the interface of second-person.  ... 
doi:10.35940/ijeat.a1901.129219 fatcat:ztfwx6h4ivfrnczewdjalpcsdy

Hybrid incremental learning of new data and new classes for hand-held object recognition

Chengpeng Chen, Weiqing Min, Xue Li, Shuqiang Jiang
2019 Journal of Visual Communication and Image Representation  
As a very special yet important case of object recognition, hand-held object recognition plays an important role in intelligence technology for its many applications such as visual question-answering and  ...  We apply the proposed method to hand-held object recognition, and the experimental results demonstrate the advantage of HIL.  ...  Dataset HOD-20 The HOD [1] dataset is designed for hand-held object recognition.  ... 
doi:10.1016/j.jvcir.2018.11.009 fatcat:ufingwydkffoxbrwo5hdvui3sq

Learning to Recognize Hand-Held Objects from Scratch [chapter]

Xue Li, Shuqiang Jiang, Xiong Lv, Chengpeng Chen
2016 Lecture Notes in Computer Science  
In this work, we present a hand-held object recognition system which can incrementally enhance its recognition ability from the beginning through interaction with humans.  ...  By automatically capturing the images of hand-held objects and the voice of users, our system can treat the interacting person as a strong teacher.  ...  As manipulating objects with hands is a straightforward way for human-machine interaction [19, 14, 15], hand-held object recognition is a special and important case of object recognition.  ... 
doi:10.1007/978-3-319-48896-7_52 fatcat:r44mefpxlna4zpx5qjwjk2nwva

Portable Camera-Based Assistive Text and Product Label Reading From Hand-Held Objects for Blind Persons

Chucai Yi, Yingli Tian, Aries Arditi
2014 IEEE/ASME transactions on mechatronics  
We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily lives.  ...  The proof-of-concept prototype is also evaluated on a dataset collected from 10 blind persons to assess the effectiveness of the system's hardware.  ...  ACKNOWLEDGMENTS The authors thank the anonymous reviewers for their constructive comments and insightful suggestions that improved the quality of this manuscript.  ... 
doi:10.1109/tmech.2013.2261083 fatcat:2l6hnz7vazhyjf6xbqjlk5xkge

REAL TIME DETECTION AND RECOGNITION OF HAND HELD OBJECTS TO ASSIST BLIND PEOPLE

2017 International Journal of Advance Engineering and Research Development  
Said for her expert advice and encouragement throughout this difficult project, as well as project coordinator Dr. K. S. Wagh and Head of Department Prof. S. N. Zaware.  ...  We present a camera-oriented object recognition application to help visually impaired people recognize hand-held objects in their day-to-day lives.  ...  Proposed System 3.1 System Architecture We propose an optical sensor-based object recognition system to help visually impaired persons recognize hand-held objects in their day-to-day lives.  ... 
doi:10.21090/ijaerd.rtde20 fatcat:gpmdhnjwdnfvfns663cy35vst4

Portable Camera-Based Product Label Reading For Blind People

Rajkumar N, Anand M.G, Barathiraja N
2014 International Journal of Engineering Trends and Technology  
We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging from hand-held objects in their daily life.  ...  The proof-of-concept example is also evaluated on a dataset collected using ten blind persons to assess the effectiveness of the scheme.  ...  from hand-held objects.  ... 
doi:10.14445/22315381/ijett-v10p303 fatcat:frtpgztht5glnjrmgtyphvjise

Understanding hand-object manipulation by modeling the contextual relationship between actions, grasp types and object attributes [article]

Minjie Cai, Kris Kitani, Yoichi Sato
2018 arXiv   pre-print
This paper proposes a novel method for understanding daily hand-object manipulation by developing computer vision-based techniques.  ...  in hand-object manipulation.  ...  For example, the action of scooping indicates a high probability of the combination of a container (such as a bottle) held by some power grasp and a long-shaped tool (such as a spoon) held by some precision  ... 
arXiv:1807.08254v1 fatcat:bikfkhkjvfbjfgng4tbv2kx2di

Combining Pose-Invariant Kinematic Features and Object Context Features for RGB-D Action Recognition

Manoj Ramanathan, Institute for Media Innovation, Nanyang Technological University, Singapore, Jaroslaw Kochanowicz, Nadia Magnenat Thalmann
2019 International Journal of Machine Learning and Computing  
For capturing object context features, a convolutional neural network (CNN) classifier is proposed to identify the involved objects.  ...  This study aims to propose a novel pose-invariant action recognition framework based on kinematic features and object context features.  ...  ACKNOWLEDGEMENTS This research is supported by the BeingTogether Centre, a collaboration between Nanyang Technological University (NTU) Singapore and University of North Carolina (UNC) at Chapel Hill.  ... 
doi:10.18178/ijmlc.2019.9.1.763 fatcat:etvpm6gfnfasjgetbcinuugpda

Automatisches Kommissionieren (Automated Order Picking) [chapter]

Werner Pivit
1985 Montage · Handhabung · Industrieroboter  
Therefore, a smartwatch and a low-cost camera which are both worn by the picker are combined with activity and object recognition methods for surveying the picking process.  ...  Then, barcode detection and a CNN (Convolutional Neural Network) based object recognition approach are employed for recognizing whether the correct item is chosen.  ...  SMALL HAND-HELD OBJECTS DATASET For comparison, the method has also been evaluated on the SHORT dataset for small hand-held object recognition [14] .  ... 
doi:10.1007/978-3-662-30428-0_18 fatcat:mt4aitf6pngwrcsb3fvxokr7ee

An Object is Worth Six Thousand Pictures: The Egocentric, Manual, Multi-image (EMMI) Dataset

Xiaohan Wang, Fernanda M. Eliott, James Ainooson, Joshua H. Palmer, Maithilee Kunda
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
We describe a new image dataset, the Egocentric, Manual, Multi-Image (EMMI) dataset, collected to enable the study of how appearance-related and distributional properties of visual experience affect learning  ...  We also present results from initial experiments, using deep convolutional neural networks, that begin to examine how different distributions of training data can affect visual object recognition, and  ...  This work was supported in part by a Vanderbilt Discovery Grant, titled "New Explorations in Visual Object Recognition."  ... 
doi:10.1109/iccvw.2017.279 dblp:conf/iccvw/WangEAPK17 fatcat:d5inl3ob5bgcjfvqnp3k4uvv2i

PFID: Pittsburgh fast-food image dataset

Mei Chen, Kapil Dhingra, Wen Wu, Lei Yang, Rahul Sukthankar, Jie Yang
2009 2009 16th IEEE International Conference on Image Processing (ICIP)  
This work was motivated by research on fast food recognition for dietary assessment.  ...  We introduce the first visual dataset of fast foods with a total of 4,545 still images, 606 stereo pairs, 303 360° videos for structure from motion, and 27 privacy-preserving videos of eating events of  ...  INTRODUCTION Image datasets are a prerequisite to visual object recognition research such as object modeling, detection, classification, and recognition.  ... 
doi:10.1109/icip.2009.5413511 dblp:conf/icip/ChenDWYSY09 fatcat:kwczxwzsrnfw3eqp6vcdgemi6e

Multiview RGB-D Dataset for Object Instance Detection [article]

Georgios Georgakis, Md Alimoor Reza, Arsalan Mousavian, Phi-Hung Le, Jana Kosecka
2016 arXiv   pre-print
for object detection and recognition.  ...  This paper presents a new multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments including a subset of objects from the BigBird dataset.  ...  Some of the experiments were run on ARGO, a research computing cluster provided by the Office of Research Computing at George Mason University, VA. (URL: http://orc.gmu.edu).  ... 
arXiv:1609.07826v1 fatcat:hcn6tpj5xvgdbabor5qwqh6som