A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL. The file type is application/pdf.
Semantic-Based Explainable AI: Leveraging Semantic Scene Graphs and Pairwise Ranking to Explain Robot Failures
[article]
2021
arXiv
pre-print
Our framework autonomously captures the semantic information in a scene to produce semantically descriptive explanations for everyday users. ...
Existing natural language explanations hand-annotate contextual information from an environment to help everyday people understand robot failures. ...
Within the robotics community, scene graphs have been utilized for scene analysis and goal-directed manipulation. ...
arXiv:2108.03554v1
fatcat:xv6a3mkenfe4jf35ph2fqht5by
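To illustrate the kind of representation this entry refers to, here is a minimal Python sketch (not the authors' implementation) of a semantic scene graph plus a toy pairwise-ranking step over candidate explanations; the node attributes, the single "relevance" feature, and the scoring rule are all hypothetical.

```python
# Minimal sketch (not the paper's code): a semantic scene graph as object
# nodes with attributes plus spatial-relation edges, and a toy pairwise
# comparison that ranks candidate failure explanations.
from itertools import combinations

scene_graph = {
    "nodes": {
        "cup_1":   {"type": "cup",   "color": "red",   "graspable": True},
        "table_1": {"type": "table", "color": "brown", "graspable": False},
    },
    "edges": [("cup_1", "on_top_of", "table_1")],
}

candidates = [
    {"text": "The cup is too far from the gripper.",  "relevance": 0.4},
    {"text": "The cup on the table is out of reach.", "relevance": 0.9},
    {"text": "The lighting in the room changed.",     "relevance": 0.1},
]

def prefer(a, b):
    """Pairwise comparison: return the explanation judged more relevant."""
    return a if a["relevance"] >= b["relevance"] else b

# Count pairwise wins as a stand-in for a learned pairwise-ranking model.
wins = {c["text"]: 0 for c in candidates}
for a, b in combinations(candidates, 2):
    wins[prefer(a, b)["text"]] += 1

print("Scene relations:", scene_graph["edges"])
print("Top-ranked explanation:", max(wins, key=wins.get))
```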
Commonsense Scene Semantics for Cognitive Robotics: Towards Grounding Embodied Visuo-Locomotive Interactions
2017
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
We present a commonsense, qualitative model for the semantic grounding of embodied visuo-spatial and locomotive interactions. ...
The key contribution is an integrative methodology combining low-level visual processing with high-level, human-centred representations of space and motion rooted in artificial intelligence. ...
We also acknowledge the support of Omar Moussa, Thomas Hudkovic, and Vijayanta Jain in preparation of parts of the overall activity dataset. ...
doi:10.1109/iccvw.2017.93
dblp:conf/iccvw/SuchanB17
fatcat:y6hjq7mtjfgxvfg4bcuhzpb73y
Special Issue on Assistive Computer Vision and Robotics - "Assistive Solutions for Mobility, Communication and HMI"
2016
Computer Vision and Image Understanding
... feature extraction, tracking, 3D morphometric analysis. ...
Assistive technologies provide a set of advanced tools that can improve the quality of life not only for impaired people, patients and elderly but also for healthy people struggling with everyday actions ...
doi:10.1016/j.cviu.2016.05.014
fatcat:ek3uupvcvvfqdh6u6chla56w3q
Commonsense Scene Semantics for Cognitive Robotics: Towards Grounding Embodied Visuo-Locomotive Interactions
[article]
2017
arXiv
pre-print
We present a commonsense, qualitative model for the semantic grounding of embodied visuo-spatial and locomotive interactions. ...
The key contribution is an integrative methodology combining low-level visual processing with high-level, human-centred representations of space and motion rooted in artificial intelligence. ...
We also acknowledge the support of Omar Moussa, Thomas Hudkovic, and Vijayanta Jain in preparation of parts of the overall activity dataset. ...
arXiv:1709.05293v1
fatcat:5ll7cn2s6bahpfcjjiuxs3hrfi
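As a concrete, simplified illustration of qualitative spatial grounding (a sketch only, not the model described in this paper), coarse relations such as left_of or overlaps can be read off 2D bounding boxes; the box values and relation definitions below are assumptions.

```python
# Sketch only: deriving coarse qualitative spatial relations from 2D bounding
# boxes (x_min, y_min, x_max, y_max), a simplified stand-in for the richer
# space/motion model the paper describes. Box values are invented.
def left_of(a, b):
    return a[2] <= b[0]   # a ends before b starts along the x-axis

def overlaps(a, b):
    return not (a[2] <= b[0] or b[2] <= a[0] or
                a[3] <= b[1] or b[3] <= a[1])

person = (10, 20, 60, 200)   # hypothetical detections in pixel coordinates
door   = (80, 10, 140, 210)

print("person left_of door:", left_of(person, door))    # True
print("person overlaps door:", overlaps(person, door))  # False
```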
Empathy
2014
Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14
The role of human-robot interaction is becoming more important as everyday robotic devices begin to permeate into our lives. ...
In this study, we video-prototyped a user's interactions with a set of robotic drawers. The user and robot each displayed one of five emotional states: angry, happy, indifferent, sad, and timid. ...
INTRODUCTION: Autonomous robots have begun to permeate into various facets of our everyday lives. ...
doi:10.1145/2559636.2563720
dblp:conf/hri/MokYSJ14
fatcat:padmzyocsvcmnpi7awu6zfeuvu
Special issue on Assistive Computer Vision and Robotics - Part I
2016
Computer Vision and Image Understanding
Wang et al. provides a method for the characterization of everyday activities from egocentric images. ...
From the trajectory of the fingertip, the written character is localised and recognised simultaneously. The paper "Geodesic pixel neighborhoods for 2D and 3D scene understanding" by V. ...
doi:10.1016/j.cviu.2016.05.010
fatcat:tiiosvi5lbagnecmyekiz2p57m
RoboSherlock: Cognition-enabled Robot Perception for Everyday Manipulation Tasks
[article]
2019
arXiv
pre-print
We present RoboSherlock, a knowledge-enabled cognitive perception system for mobile robots performing human-scale everyday manipulation tasks. ...
We demonstrate the potential of the proposed framework through feasibility studies of systems for real-world scene perception that have been built on top of the framework. ...
Perception for Everyday Manipulation Tasks: Everyday robot manipulation tasks usually take a considerable amount of time to execute, are in some sense repetitive in their nature and require interaction ...
arXiv:1911.10079v1
fatcat:hnude4inmfcs5ncfwgjclkftam
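For intuition about composable, knowledge-enabled perception, here is a toy annotator-style pipeline in Python; this only sketches the general pattern, is NOT RoboSherlock's actual API, and every name below is made up.

```python
# Toy annotator-style perception pipeline: each "annotator" enriches a shared
# scene description. Illustrates the general pattern of composable perception
# experts; not RoboSherlock's API.
def color_annotator(scene):
    for obj in scene["objects"]:
        obj.setdefault("color", "unknown")   # placeholder for a real vision module
    return scene

def shape_annotator(scene):
    for obj in scene["objects"]:
        obj.setdefault("shape", "box")       # placeholder for a real vision module
    return scene

PIPELINE = [color_annotator, shape_annotator]

scene = {"objects": [{"id": "obj_0"}, {"id": "obj_1"}]}
for annotator in PIPELINE:
    scene = annotator(scene)

print(scene["objects"])
```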
Combining perception and knowledge processing for everyday manipulation
2010
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems
... and scenes and to infer answers to complex queries that require the combination of perception and knowledge processing. ...
Key features of K-COPMAN are that it can make a robot environment-aware and that it supports goal-directed as well as passive perceptual processing. ...
INTRODUCTION: Autonomous robots performing everyday manipulation tasks have to make many decisions that require the combination of perception and knowledge processing. ...
doi:10.1109/iros.2010.5651006
dblp:conf/iros/PangercicTJB10
fatcat:btzdhydhtfdlhmfmostcj5uq5i
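A minimal sketch of a combined perception-and-knowledge query in the spirit of this entry (not K-COPMAN's interface): join what the robot currently perceives against background knowledge to answer a goal-directed question. All facts below are invented for illustration.

```python
# Sketch of a combined perception-and-knowledge query: which items required
# for a task are not currently perceived? Knowledge base and detections are
# invented; a real system would query an ontology and a perception pipeline.
required_for = {"breakfast": {"mug", "plate", "spoon", "cereal_box"}}

perceived_on_table = {"mug", "plate"}   # stub for detector output

missing = required_for["breakfast"] - perceived_on_table
print("Missing items for breakfast:", sorted(missing))
```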
Humanoid Cognitive Robots
2005
Journal of the Robotics Society of Japan
Acknowledgements: The work of the ...
It serves the robot to identify important objects in a static scene and to allow a closer analysis of the identified objects. ...
... in particular a microphone array for a spatial analysis of the acoustic scene and a colour stereo camera system. ...
doi:10.7210/jrsj.23.517
fatcat:x4ft3by6tncm3oa5znqrsuuyau
The role of culture in comics of the quotidian
2015
Journal of Graphic Novels and Comics
The analysis points to the conclusion that the culture of the world inside comics must be accounted for in most any attempt to understand the quotidian in comics. ...
This is also found in studies of literature, art and economics. The premise of the quotidian, however, must be examined through a lens of culture. ...
Thanks to Julie Pelton for pointing me toward Swidler's work on culture. I appreciate the very helpful comments of Nina Mickwitz on notions of the everyday, though any errors remain mine. ...
doi:10.1080/21504857.2014.1002853
fatcat:otbvyxinkzca7hilb2mmfkoeje
TrimBot2020: an outdoor robot for automatic gardening
[article]
2018
arXiv
pre-print
Robots are increasingly present in modern industry and also in everyday life. ...
Autonomous lawn mowers are successful market applications of gardening robotics. ...
For instance, analysis of color images gives information about the type of objects (e.g. bushes, roses, trees, hedges, etc.) present in the scene. ...
arXiv:1804.01792v2
fatcat:z444vmgoxzbebl4iawy6dop6m4
Eccentricity edge-graphs from HDR images for object recognition by humanoid robots
2010
2010 10th IEEE-RAS International Conference on Humanoid Robots
Experimental evaluation with the humanoid robot ARMAR-III is presented. ...
The approach acquires accurate high dynamic range images to properly capture complex heterogeneously lighted scenes. ...
Avoiding the ubiquitous high-contrast image content in everyday humanoid robot applications is not plausible. ...
doi:10.1109/ichr.2010.5686336
dblp:conf/humanoids/Gonzalez-AguirreAD10
fatcat:f6deefcnqzb3rmfbzefia3pmdq
Editorial: Computational Approaches for Human-Human and Human-Robot Social Interactions
2020
Frontiers in Robotics and AI
Lammers et al. present a dataset of everyday actions expressing various emotions. ...
Automatized detection and analysis of non-verbal social signals can be of particular relevance not only to human-human interaction (HHI) but also in human-robot interaction (HRI). ...
doi:10.3389/frobt.2020.00055
pmid:33501223
pmcid:PMC7805690
fatcat:ewd7twmq5zamzapnxazm5sfpci
Social behavior recognition using body posture and head pose for human-robot interaction
2012
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems
Robots that interact with humans in everyday situations need to be able to interpret the nonverbal social cues of their human interaction partners. ...
With this model, the bartender robot of the project JAMES can recognize typical social behaviors of human customers. ...
Fig. 5 shows two scenes with labeled and recognized states in comparison; one of a participant in a group, one of a single participant interacting with the robot. ...
doi:10.1109/iros.2012.6385460
dblp:conf/iros/GaschlerJGHRK12
fatcat:7pecsokjyne75am6atcd7vxooe
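As a simplified illustration of the recognition task described in this entry (not the paper's learned model), a coarse social state can be guessed from distance and head orientation; the features, thresholds, and state names are assumptions.

```python
# Rule-based sketch of social-state recognition from distance and head pose;
# the paper uses a learned model over richer posture features. Thresholds and
# state names here are assumptions.
def social_state(distance_m, head_yaw_deg):
    """Classify a coarse social state from hypothetical pose features."""
    if distance_m < 1.5 and abs(head_yaw_deg) < 20:
        return "seeking_attention"        # close to the robot and facing it
    if distance_m < 1.5:
        return "present_not_engaged"      # close but looking away
    return "not_interacting"

print(social_state(1.0, 5))    # -> seeking_attention
print(social_state(1.2, 90))   # -> present_not_engaged
print(social_state(3.0, 0))    # -> not_interacting
```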
Cognitive Interpretation of Everyday Activities: Toward Perceptual Narrative Based Visuo-Spatial Scene Interpretation
[article]
2013
arXiv
pre-print
ACM Classification: I.2 Artificial Intelligence: I.2.0 General -- Cognitive Simulation, I.2.4 Knowledge Representation Formalisms and Methods, I.2.10 Vision and Scene Understanding: Architecture and control structures, Motion, Perceptual reasoning, Shape, Video analysis. General keywords: cognitive systems; human-computer interaction; spatial cognition and computation; commonsense reasoning; spatial and temporal ...
... logistical processes, activities of everyday living) of the environment being modelled. ...
arXiv:1306.5308v1
fatcat:jxqphtmqrjaz3i77dvgeiklwoq
Showing results 1 — 15 out of 10,723 results