
Classifying Object Manipulation Actions based on Grasp-types and Motion-Constraints [article]

Kartik Gupta, Darius Burschka, Arnav Bhavsar
2018 arXiv pre-print
involving information of grasp and motion-constraints, c) fine-grained and coarse-grained object manipulation action recognition based on fine-grained as well as coarse-grained grasp type information  ...  We propose to use grasp and motion-constraint information to recognise and understand action intention with different objects.  ...  based on 33 fine-level grasp types, combining them with the remaining grasp attributes and motion-constraints using different multi-class and binary classifiers.  ...
arXiv:1806.07574v1 fatcat:4vtq63rgxbbkjlr5zbalw2oufe
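
The recipe this abstract sketches (concatenate grasp-type, grasp-attribute, and motion-constraint features, then train multi-class classifiers over them) can be illustrated with a minimal sketch. Nothing below comes from the paper: the feature dimensions, the six action classes, and the SVM choice are all assumptions for illustration.

```python
# Illustrative sketch (not the authors' code): classify manipulation actions
# from concatenated grasp-type and motion-constraint features. Feature
# dimensions, class counts, and the classifier choice are assumptions.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Assumed inputs per video clip: a 33-way fine grasp-type score vector,
# a few binary grasp attributes, and a binary motion-constraint descriptor.
n_clips = 200
grasp_type_scores = rng.random((n_clips, 33))          # fine-level grasp types
grasp_attributes = rng.integers(0, 2, (n_clips, 5))    # e.g. power/precision flags
motion_constraints = rng.integers(0, 2, (n_clips, 3))  # e.g. constrained motion axes
actions = rng.integers(0, 6, n_clips)                  # 6 hypothetical action classes

X = np.hstack([grasp_type_scores, grasp_attributes, motion_constraints])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:150], actions[:150])
print("held-out accuracy:", clf.score(X[150:], actions[150:]))
```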

Effectiveness of Grasp Attributes and Motion-Constraints for Fine-Grained Recognition of Object Manipulation Actions

Kartik Gupta, Darius Burschka, Arnav Bhavsar
2016 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
Our results clearly demonstrate the usefulness of grasp characteristics and motion-constraints to understand actions intended with an object.  ...  We propose to leverage grasp and motion-constraint information, using a suitable representation, to recognize and understand action intention with different objects.  ...  and functional class (use and hold), whereas we propose a solution to task or manipulation action classification based on the grasp information, motion-constraints, and object class.  ...
doi:10.1109/cvprw.2016.156 dblp:conf/cvpr/GuptaBB16 fatcat:7hjrhcaaunb75ax5tfbnqtz7pu

Templates for pre-grasp sliding interactions

Daniel Kappler, Lillian Y. Chang, Nancy S. Pollard, Tamim Asfour, Rüdiger Dillmann
2012 Robotics and Autonomous Systems  
The template information focuses the search based on the object features, resulting in increased success of adapting a template pose and decreased planning time.  ...  The motion planning is tractable because information from pre-grasp manipulation examples reduces the search space to promising hand poses and shapes.  ...  The fourth and fifth authors' work described in this paper was partially conducted within the EU Cognitive Systems project GRASP (IST-FP7-IP-215821), funded by the European Commission and the German Humanoid  ...
doi:10.1016/j.robot.2011.07.015 fatcat:7wwcmpxeczaqnje6zslqcihozy

From human action understanding to robot action execution: how the physical properties of handled objects modulate non-verbal cues

Nuno Ferreira Duarte, Konstantinos Chatzilygeroudis, Jose Santos-Victor, Aude Billard
2020 Joint IEEE 10th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)
We then included these models into the design of an online classifier that identifies the type of action, based on the human wrist movement.  ...  We close the loop from action understanding to robot action execution with an adaptive and robust controller based on the learned classifier, and evaluate the entire pipeline on a collaborative task with  ...  Based on this observation, we propose a classifier to predict the type of behaviour during the movement instead of recognizing the action only after its completion.  ... 
doi:10.1109/icdl-epirob48136.2020.9278084 fatcat:6zdiuag5yjcpxj46ixwa3nawky
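
The idea of classifying the type of action from the wrist movement while it is still unfolding can be sketched with a sliding-window classifier. This is not the paper's model: the window length, the simple velocity features, and the logistic-regression classifier are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's model): an online classifier that
# emits a label while a wrist trajectory is still unfolding, using simple
# sliding-window statistics over 3-D wrist positions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def window_features(wrist_xyz: np.ndarray) -> np.ndarray:
    """Mean velocity plus speed statistics over a window of 3-D wrist positions."""
    vel = np.diff(wrist_xyz, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    return np.concatenate([vel.mean(axis=0), [speed.mean(), speed.max()]])

rng = np.random.default_rng(1)
# Assumed training data: 100 recorded windows, 2 behaviour classes.
X_train = np.stack([window_features(rng.random((20, 3))) for _ in range(100)])
y_train = rng.integers(0, 2, 100)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At run time, re-classify after every new wrist sample once the window fills,
# so a prediction is available before the motion completes.
stream = rng.random((60, 3))
for t in range(20, len(stream)):
    feats = window_features(stream[t - 20:t])
    label = clf.predict(feats[None, :])[0]
print("current best guess:", label)
```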

A Whole-Body Pose Taxonomy for Loco-Manipulation Tasks [article]

Júlia Borràs, Tamim Asfour
2015 arXiv pre-print
The taxonomy induces a classification of motion primitives based on the pose used for support, and a set of rules to store and generate new motions.  ...  Using motion capture data with multi-contacts, we can identify support poses providing a segmentation that can distinguish between locomotion and manipulation parts of an action.  ...  The authors would like to thank Ömer Terlemez, Mirko Wächter and Christian Mandery for their collaboration and fruitful discussions about this work.  ...
arXiv:1503.06839v2 fatcat:f4uun2kjb5abbmkmad4zvwgceq

A whole-body pose taxonomy for loco-manipulation tasks

Júlia Borràs, Tamim Asfour
2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
The taxonomy induces a classification of motion primitives based on the pose used for support, and a set of rules to store and generate new motions.  ...  Using motion capture data with multi-contacts, we can identify support poses providing a segmentation that can distinguish between locomotion and manipulation parts of an action.  ...  The authors would like to thank Ömer Terlemez, Mirko Wächter and Christian Mandery for their collaboration and fruitful discussions about this work.  ... 
doi:10.1109/iros.2015.7353578 dblp:conf/iros/SolA15 fatcat:nvyc73zm75gpvlj24bvximt6n4

Classifying human manipulation behavior

I. M. Bullock, A. M. Dollar
2011 IEEE International Conference on Rehabilitation Robotics
This hand-centric, motion-centric taxonomy differentiates tasks based on criteria such as object contact, prehension, and the nature of object motion relative to a hand frame.  ...  A sub-classification of the most dexterous categories, within-hand manipulation, is also presented, based on the principal axis of object rotation or translation in the hand frame.  ...  and Heidi Hong for helping to create the hand drawings used in the figures.  ... 
doi:10.1109/icorr.2011.5975408 pmid:22275611 fatcat:sfkvw2yujbbw5j3ncdsgajz3eu

Observing Human-Object Interactions: Using Spatial and Functional Compatibility for Recognition

A. Gupta, A. Kembhavi, L.S. Davis
2009 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Previous approaches to object and action recognition rely on static shape/appearance feature matching and motion analysis, respectively.  ...  It involves understanding scene/event, analyzing human movements, recognizing manipulable objects, and observing the effect of the human movement on those objects.  ...  The authors would like to thank Vitaladevuni Shiv and Mohamed Hussein for providing the code for reach motion detection and object detection, respectively.  ... 
doi:10.1109/tpami.2009.83 pmid:19696449 fatcat:inec6qwtbrgcfmdgiaggjmbx3q

Interpreting Manipulation Actions: From Language to Execution [chapter]

Bao-Anh Dang-Vu, Oliver Porges, Máximo A. Roa
2015 Advances in Intelligent Systems and Computing  
process), the extraction and consideration of task information and grasp constraints for solving the manipulation problem, and the use of integrated grasp and motion planning that avoids relying on  ...  The proposed approach includes a clustering process that discriminates areas on the object that can be used for different types of tasks (therefore providing valuable information for the grasp planning  ...  The desired goal, the task constraints and the grasp restrictions are considered in an integrated grasp and motion planner that derives feasible trajectories for performing the manipulation action.  ...
doi:10.1007/978-3-319-27146-0_14 fatcat:xx4vplxygnakbg5eoqmevhtwii

Understanding hand-object manipulation by modeling the contextual relationship between actions, grasp types and object attributes [article]

Minjie Cai, Kris Kitani, Yoichi Sato
2018 arXiv   pre-print
Specifically, we focus on recognizing hand grasp types, object attributes and manipulation actions within a unified framework by exploring their contextual relationships.  ...  In the proposed model, we explore various semantic relationships between actions, grasp types and object attributes, and show how the context can be used to boost the recognition of each component.  ...  The functional context models the functional constraints on grasp types and object attributes within different manipulation actions.  ...
arXiv:1807.08254v1 fatcat:bikfkhkjvfbjfgng4tbv2kx2di
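
The contextual modeling this abstract describes (combining per-component recognizers with pairwise compatibilities between actions, grasp types, and object attributes) can be sketched as a joint argmax over label triples. All labels, scores, and compatibility tables below are invented for illustration and do not reflect the authors' actual model.

```python
# Illustrative sketch (not the authors' model): joint recognition of
# (action, grasp type, object attribute) by summing per-component scores
# with pairwise compatibility terms, then taking the argmax over triples.
import itertools
import numpy as np

actions = ["pour", "drink", "cut"]          # hypothetical label sets
grasps = ["power", "precision"]
attributes = ["container", "rigid"]

rng = np.random.default_rng(2)
unary_a = rng.random(len(actions))          # stand-ins for per-classifier scores
unary_g = rng.random(len(grasps))
unary_o = rng.random(len(attributes))
compat_ag = rng.random((len(actions), len(grasps)))      # action-grasp context
compat_ao = rng.random((len(actions), len(attributes)))  # action-object context

best = max(
    itertools.product(range(len(actions)), range(len(grasps)), range(len(attributes))),
    key=lambda t: unary_a[t[0]] + unary_g[t[1]] + unary_o[t[2]]
                  + compat_ag[t[0], t[1]] + compat_ao[t[0], t[2]],
)
print(actions[best[0]], grasps[best[1]], attributes[best[2]])
```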

CNN-Based Hand Grasping Prediction and Control via Postural Synergy Basis Extraction

Quan Liu, Mengnan Li, Chaoyue Yin, Guoming Qian, Wei Meng, Qingsong Ai, Jiwei Hu
2022 Sensors  
A convolutional neural network (CNN)-based hand activity prediction method is proposed, which utilizes motion data to estimate hand grasping actions.  ...  The prediction accuracy of the proposed method for the selected hand motions could reach up to 94%, and the robotic model could be operated naturally based on the patient's movement intention, so as to complete  ...  The "Grasp" taxonomy is established on the following four bases: (1) according to the grasping force, grasping actions can be divided into force type, fine type, and a type between the first two;  ...
doi:10.3390/s22030831 pmid:35161580 pmcid:PMC8838930 fatcat:ur4adire5zhlrogskbvltovsrm
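
Postural synergy basis extraction is commonly done with PCA over recorded hand joint angles; a minimal sketch follows, assuming a 20-DoF hand model and three synergies. This is not the paper's pipeline (which couples the synergy basis with a CNN predictor); it only illustrates the basis-extraction step.

```python
# Illustrative sketch (not the paper's pipeline): extract a postural
# synergy basis from hand joint-angle recordings with PCA, then
# reconstruct a grasp posture from a few synergy coefficients.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
joint_angles = rng.random((500, 20))  # 500 recorded postures, assumed 20-DoF hand

pca = PCA(n_components=3)             # first few synergies capture most variance
coeffs = pca.fit_transform(joint_angles)

# Reconstruct a posture from its low-dimensional synergy coefficients.
posture = pca.inverse_transform(coeffs[:1])
print("reconstruction error:", np.abs(posture - joint_angles[:1]).max())
```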

Representation of manipulation-relevant object properties and actions for surprise-driven exploration

Susanne Petsch, Darius Burschka
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems
We propose a framework for the sensor-based estimation of manipulation-relevant object properties and the abstraction of known actions in a learning setup from the observation of humans.  ...  The descriptors consist of an object-centric representation of manipulation constraints and a scene-specific action graph. The graph spans the typical places where objects are placed.  ...  Grasp type: in the current implementation, the grasp type is determined by manual labeling. The proposed system is tested on sequences of real human actions.  ...
doi:10.1109/iros.2011.6094822 dblp:conf/iros/PetschB11 fatcat:locm55a7dbc4fiqmeizozixfjy
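
The scene-specific action graph described here (nodes are the typical places of objects, edges carry the actions observed between them) admits a very small data-structure sketch. The place and action names below are invented; the authors' descriptor is richer and pairs this graph with object-centric manipulation constraints.

```python
# Illustrative sketch (not the authors' representation): a scene-specific
# action graph whose nodes are typical object places and whose edges store
# the manipulation actions observed between them.
from collections import defaultdict

action_graph: dict[tuple[str, str], list[str]] = defaultdict(list)

def observe(frm: str, to: str, action: str) -> None:
    """Record an observed action moving an object between two typical places."""
    action_graph[(frm, to)].append(action)

observe("shelf", "table", "pick-and-place")  # hypothetical observations
observe("table", "sink", "pour")

# Query which known actions connect two places.
print(action_graph[("shelf", "table")])
```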

Repairing plans for object finding in 3-D environments

J. Espinoza, R. Murrieta-Cid
2011 IEEE/RSJ International Conference on Intelligent Robots and Systems
doi:10.1109/iros.2011.6048369 fatcat:mwgmvghcxbbm7apdvvo2tapxea

Classifying Human Hand Use and the Activities of Daily Living [chapter]

Aaron M. Dollar
2014 Springer Tracts in Advanced Robotics  
Finally, a taxonomy classifying hand-based manipulation is presented, providing a hand-centric and motion-centric categorization of hand use.  ...  Next, an overview of work related to classifications and taxonomies of static grasp types is presented, followed by a study investigating the frequency of use of various grasp types by a housekeeper and  ...  Acknowledgments The author would like to thank Ian Bullock, Josh Zheng, Sara De La Rosa, and Kayla Matheus for their work on the studies presented in this paper, Lael Odhner, Raymond Ma, and Leif Jentoft  ... 
doi:10.1007/978-3-319-03017-3_10 fatcat:nk6z22w5kvbuvggvpzkii4y74m