88,215 Hits in 2.9 sec

Recognizing Object Affordances to Support Scene Reasoning for Manipulation Tasks [article]

Fu-Jen Chu, Ruinian Xu, Chao Tang, Patricio A. Vela
2020 arXiv   pre-print
Thus, integrating affordance-based reasoning into symbolic action planning pipelines would enhance the flexibility of robot manipulation.  ...  Additionally, task-oriented grasping for cutting and pounding actions demonstrates the exploitation of multiple affordances for a given object to complete specified tasks.  ...  We tackle simple, task-level reasoning for manipulation based on symbolic object and action knowledge extracted from visual recognition algorithms. Symbolic Reasoning for Manipulation.  ...
arXiv:1909.05770v2 fatcat:qzinrh63srg2pdhfuu42liiqlm

Learning to Act Properly: Predicting and Explaining Affordances from Images [article]

Ching-Yao Chuang, Jiaman Li, Antonio Torralba, Sanja Fidler
2018 arXiv   pre-print
We address the problem of affordance reasoning in diverse scenes that appear in the real world. Affordances relate the agent's actions to their effects when taken on the surrounding objects.  ...  In our work, we take the egocentric view of the scene and aim to reason about action-object affordances that respect both the physical world and the social norms imposed by society.  ...  Related Work. We review works most related to ours, focusing on affordances, visual reasoning, and captioning. Affordance Reasoning.  ...
arXiv:1712.07576v2 fatcat:tfmifkjzlndnjnobskksacjy5u

Learning to Act Properly: Predicting and Explaining Affordances from Images

Ching-Yao Chuang, Jiaman Li, Antonio Torralba, Sanja Fidler
2018 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition  
Related Work. We review works most related to ours, focusing on affordances, visual reasoning, and captioning. Affordance Reasoning.  ...  Affordance Reasoning with Graph Neural Models. In this section, we propose a model to perform visual reasoning about action-object affordances from images.  ...
doi:10.1109/cvpr.2018.00108 dblp:conf/cvpr/ChuangL0F18 fatcat:zjzjedxb2vgzvpxnqpned4eteq

Towards Grasp-Oriented Visual Perception for Humanoid Robots

Jeannette Bohg, Carl Barck-Holst, Kai Huebner, Maria Ralph, Babak Rasolzadeh, Dan Song, Danica Kragic
2009 International Journal of Humanoid Robotics  
The perception-action cycle is connected to the reasoning system based on the idea of affordances.  ...  This vision system is targeted firstly at interaction with the world through recognition and grasping of objects, and secondly at being an interface for the reasoning and planning module to the real  ...  Acknowledgments. This work was supported by the EU through the projects PACO-PLUS (IST-FP6-IP-027657) and GRASP (IST-FP7-IP-215821).  ...
doi:10.1142/s0219843609001796 fatcat:ns4e6dn7i5fp3pzhozutfxccsy

Beyond the Self: Using Grounded Affordances to Interpret and Describe Others' Actions

Giovanni Saponaro, Lorenzo Jamone, Alexandre Bernardino, Giampiero Salvi
2019 IEEE Transactions on Cognitive and Developmental Systems  
In our experiments, we show that the model can be used flexibly to do inference on different aspects of the scene. We can predict the effects of an action on the basis of object properties.  ...  The robot first learns the association between words and object affordances by manipulating the objects in its environment.  ...  In particular, the model can reason about the elements that constitute our computational concept of affordances, i.e., action, object features, and effects in Fig. 2.  ...
doi:10.1109/tcds.2018.2882140 fatcat:xdmgnfx6xvhpnhbywozboobkvy

Self-Assessment of Grasp Affordance Transfer

Paola Ardón, Èric Pairet, Yvan Petillot, Ronald P. A. Petrick, Subramanian Ramamoorthy, Katrin S. Lohan
2020 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)  
Reasoning about object grasp affordances allows an autonomous agent to estimate the most suitable grasp to execute a task.  ...  In this work, we present a pipeline for self-assessment of grasp affordance transfer (SAGAT) based on prior experiences.  ...
doi:10.1109/iros45743.2020.9340841 fatcat:vl3nl2u72rdmtheqq3ezypt5au

Experimental Evaluation of a Perceptual Pipeline for Hierarchical Affordance Extraction [chapter]

Peter Kaiser, Eren E. Aksoy, Markus Grotz, Dimitrios Kanoulas, Nikos G. Tsagarakis, Tamim Asfour
2017 Springer Proceedings in Advanced Robotics  
In our previous work, we developed a perceptual pipeline for the extraction of affordances for loco-manipulation actions based on a simplified representation of the environment starting from RGB-D camera  ...  The overall goal of the perceptual pipeline is to provide a robust and reliable perceptual mechanism for affordance-based action execution.  ...  We particularly focus on platform grasps and prismatic grasps, as we think that these two grasp types are predominant for the considered set of actions.  ...
doi:10.1007/978-3-319-50115-4_13 dblp:conf/iser/KaiserAGKTA16 fatcat:h7konxiohzgd7lngyppfgsjaxe

Relational Affordance Learning for Task-Dependent Robot Grasping [chapter]

Laura Antanas, Anton Dries, Plinio Moreno, Luc De Raedt
2018 Lecture Notes in Computer Science  
Object-task affordances facilitate semantic reasoning about pre-grasp configurations with respect to the intended tasks, favoring good grasps.  ...  Robot grasping depends on the specific manipulation scenario: the object, its properties, the task, and the grasp constraints.  ...  Related Work. Much recent work focuses on incorporating task constraints in robot grasping by learning a direct mapping function between good grasps and various constraints (on actions and geometry), action  ...
doi:10.1007/978-3-319-78090-0_1 fatcat:l57ry5go7jgrrdp4kszvlvy2wm

GIFT: Generalizable Interaction-aware Functional Tool Affordances without Labels [article]

Dylan Turpin, Liquan Wang, Stavros Tsogkas, Sven Dickinson, Animesh Garg
2021 arXiv   pre-print
Tool use requires reasoning about the fit between an object's affordances and the demands of a task.  ...  of the discovered affordances on novel tools in a self-supervised fashion.  ...  This simple thought experiment provides an example of reasoning about what Gibson called affordances: the action possibilities offered by an object [9, 10].  ...
arXiv:2106.14973v1 fatcat:g4jzc6pnzrcclmfzwpimtuja5a

Self-Assessment of Grasp Affordance Transfer [article]

Paola Ardón, Èric Pairet, Ronald P. A. Petrick, Subramanian Ramamoorthy, Katrin S. Lohan
2020 arXiv   pre-print
Reasoning about object grasp affordances allows an autonomous agent to estimate the most suitable grasp to execute a task.  ...  In this work, we present a pipeline for self-assessment of grasp affordance transfer (SAGAT) based on prior experiences. We visually detect a grasp affordance region to extract multiple grasp affordance configuration candidates.  ...  The proposed approach unifies grasp affordance reasoning and task deployment in a self-assessed system that, without the need for extensive prior experiences, is able to transfer grasp affordance configurations  ...
arXiv:2007.02132v1 fatcat:fb45gbb36zcxfnbtby64cum5bq

Choosing informative actions for manipulation tasks

Shiraj Sen, Grant Sherrick, Dirk Ruiken, Rod Grupen
2011 2011 11th IEEE-RAS International Conference on Humanoid Robots  
With this goal in mind, we present a knowledge representation that makes explicit the invariant spatial relationships between sensorimotor features comprising a rigid body and uses them to reason about  ...  the aspect to one that affords grasping.  ...  Fig. 4. The robot performing a top grasp on the object and placing it on the goal.  ...
doi:10.1109/humanoids.2011.6100911 dblp:conf/humanoids/SenSRG11 fatcat:jv3h6lzxsbeqhg7rb56nmyegdq

Affordance processing in segregated parieto-frontal dorsal stream sub-pathways

Katrin Sakreida, Isabel Effnert, Serge Thill, Mareike M. Menz, Doreen Jirak, Claudia R. Eickhoff, Tom Ziemke, Simon B. Eickhoff, Anna M. Borghi, Ferdinand Binkofski
2016 Neuroscience and Biobehavioral Reviews  
Affordances relate to both perception and action, and refer to sensory-motor processes emerging from goal-directed object interaction.  ...  The concept of affordances denotes the "action possibilities" that object properties in the environment provide to interacting organisms.  ...  Acknowledgements. We would particularly like to thank Antonio Pellicano for carefully evaluating the publications on affordances.  ...
doi:10.1016/j.neubiorev.2016.07.032 pmid:27484872 fatcat:sxitpx626vhuxkr7ajcjj3xsji

Homogeneity analysis for object-action relation reasoning in kitchen scenarios

Hanchen Xiong, Sandor Szedmak, Justus Piater
2013 Proceedings of the 2nd Workshop on Machine Learning for Interactive Systems Bridging the Gap Between Perception, Action and Communication - MLIS '13  
The model is evaluated on a dataset of objects and actions in a kitchen scenario, and the experimental results illustrate that the proposed model yields semantically reasonable interpretation of object-action  ...  the effects of different actions on an unseen object can be inferred in a data-driven way.  ...  However, most previous studies are limited to one isolated object affordance (e.g. grasping). In some cases, multiple objects are involved and interact with each other within one manipulation.  ...
doi:10.1145/2493525.2493532 dblp:conf/ijcai/XiongSP13 fatcat:wl2lhmvn6bc43moz6ls3m4qbse

Deep Affordance Foresight: Planning Through What Can Be Done in the Future [article]

Danfei Xu, Ajay Mandlekar, Roberto Martín-Martín, Yuke Zhu, Silvio Savarese, Li Fei-Fei
2021 arXiv   pre-print
In this paper, we introduce a new affordance representation that enables the robot to reason about the long-term effects of actions through modeling what actions are afforded in the future, thereby informing  ...  Based on the new representation, we develop a learning-to-plan method, Deep Affordance Foresight (DAF), that learns partial environment models of affordances of parameterized motor skills through trial-and-error  ...  The ability to reason about what actions are possible in a given situation is commonly studied through affordances.  ... 
arXiv:2011.08424v2 fatcat:pfvph6tt4zajngnmrtirhl4ioa

Validation of whole-body loco-manipulation affordances for pushability and liftability

Peter Kaiser, Markus Grotz, Eren E. Aksoy, Martin Do, Nikolaus Vahrenkamp, Tamim Asfour
2015 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids)  
Based on our previous work, we propose to apply the concept of affordances to actions of stable whole-body loco-manipulation, in particular to pushing and lifting of large objects.  ...  Affordances that refer to whole-body actions are especially valuable for humanoid robots, as the necessity of stabilization is an integral part of their control strategies.  ...  Affordances are strongly related to Object Action Complexes (OACs) [15], a framework for the representation of sensorimotor experience and behaviors based on the coupling of objects and actions.  ...
doi:10.1109/humanoids.2015.7363471 dblp:conf/humanoids/KaiserGADVA15 fatcat:6dyuusvbxzgklgd2bikkw6nl2m
Showing results 1 — 15 out of 88,215 results