A multimodal dataset for object model learning from natural human-robot interaction
2017
2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Learning object models in the wild from natural human interactions is an essential ability for robots to perform general tasks. ...
In this paper we present a robocentric multimodal dataset addressing this key challenge. Our dataset focuses on interactions where the user teaches new objects to the robot in various ways. ...
... Fundação para a Ciência e a Tecnologia (FCT) UID/CEC/50021/2013. ...
doi:10.1109/iros.2017.8206514
dblp:conf/iros/AzagraGMLCM17
fatcat:3qiwuybfvvd6fppilyurrgma3u
A MultiModal Social Robot Toward Personalized Emotion Interaction
[article]
2021
arXiv
pre-print
This study demonstrates a multimodal human-robot interaction (HRI) framework with reinforcement learning to enhance the robotic interaction policy and personalize emotional interaction for a human user ...
Moreover, the affective states of human users can serve as an indicator of the level of engagement and of successful interaction, which the robot can use as a reward signal to optimize its behaviors ...
Human-Robot Interaction Design: A multimodal HRI system will be designed to evaluate the RL framework. ...
arXiv:2110.05186v1
fatcat:dtvb5mcv35glhco7dhydyapmqy
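To make the snippet above concrete, here is a minimal sketch of the general idea of affect-driven reinforcement learning: a tabular Q-learning loop in which an estimated engagement level serves as the reward. Everything here (the state and action sets, estimate_engagement, the toy environment) is a hypothetical illustration, not the paper's actual framework.

```python
# Minimal sketch: an engagement/affect estimate as the RL reward signal.
# All names and dynamics below are illustrative assumptions.
import random

states = ["disengaged", "neutral", "engaged"]             # coarse affect states
actions = ["greet", "tell_joke", "ask_question", "wait"]  # robot behaviors

Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def estimate_engagement(state: str) -> float:
    """Stand-in for a multimodal affect recognizer (face, voice, posture)."""
    return {"disengaged": -1.0, "neutral": 0.0, "engaged": 1.0}[state]

def step(state: str, action: str) -> str:
    """Toy environment: engaging behaviors tend to raise engagement."""
    bias = 1 if action in ("tell_joke", "ask_question") else -1
    idx = max(0, min(2, states.index(state) + random.choice([bias, 0])))
    return states[idx]

state = "neutral"
for _ in range(1000):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q[(state, a)])
    next_state = step(state, action)
    reward = estimate_engagement(next_state)      # affect used as reward
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state
```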
Introduction to the Special Issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots
2014
ACM transactions on interactive intelligent systems (TiiS)
For example, a robot may coordinate its speech with its actions, taking into account (audio-) visual feedback during their execution. ...
Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. ...
ACKNOWLEDGMENTS: We thank the chief editors of ACM TiiS, Anthony Jameson, John Riedl and Krzysztof Gajos, for letting this special issue become a reality and for their strong commitment to this journal. ...
doi:10.1145/2670539
fatcat:mpwlonu2yfcnher33owzkl6j6y
Enabling Robots to Draw and Tell: Towards Visually Grounded Multimodal Description Generation
[article]
2021
arXiv
pre-print
Socially competent robots should be equipped with the ability to perceive the world that surrounds them and communicate about it in a human-like manner. ...
... visual scenes and real-life objects, namely, visually grounded multimodal description generation. ...
Research in human-robot interaction has a long-standing interest in embodied multimodal interaction. ...
arXiv:2101.12338v1
fatcat:wnt2lpde55eebdzclg22xdn63e
SIGVerse: A cloud-based VR platform for research on social and embodied human-robot interaction
[article]
2020
arXiv
pre-print
The platform also contributes by providing a dataset of social behaviors, which would be a key aspect for intelligent service robots to acquire social interaction skills based on machine learning techniques ...
Humans have to perform interactions repeatedly over a long period to demonstrate embodied and social interaction behaviors to robots or learning systems. ...
Acknowledgments: The authors would like to thank Hiroki Yamada for supporting the development of the cloud-based VR platform as a software technician. ...
arXiv:2005.00825v1
fatcat:ub77g6whcrg4jam44jd354lqly
Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality
[article]
2019
arXiv
pre-print
For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. ...
In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. ...
We discuss what we learned from the analysis of a human study and how we see the future development of efficient and natural human-robot interaction in shared workspaces. ...
arXiv:1902.01117v1
fatcat:ogxbtqxbz5bqrm6gqsx33pi6ve
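As a rough illustration of the disambiguation task in this entry's snippet, here is a minimal late-fusion sketch that combines per-modality likelihoods over candidate objects. The candidate set, likelihoods, and weights are invented for illustration; the paper's model additionally exploits temporal dependencies across modalities, which this sketch omits.

```python
# Minimal sketch: late fusion of head, gesture, and speech evidence to
# resolve which object a "fetch that one" request refers to.
# All probabilities and weights below are illustrative assumptions.
import numpy as np

candidates = ["red_mug", "blue_mug", "screwdriver"]

# Hypothetical per-modality likelihoods P(object | modality observation),
# e.g. from a gaze ray, a pointing-cone test, and a speech keyword match.
p_head    = np.array([0.5, 0.3, 0.2])
p_gesture = np.array([0.4, 0.4, 0.2])
p_speech  = np.array([0.45, 0.45, 0.1])   # "the mug" alone is ambiguous

# Log-linear fusion; weights could reflect per-modality reliability.
w = {"head": 1.0, "gesture": 1.0, "speech": 1.5}
scores = (w["head"] * np.log(p_head)
          + w["gesture"] * np.log(p_gesture)
          + w["speech"] * np.log(p_speech))
posterior = np.exp(scores - scores.max())
posterior /= posterior.sum()

print(dict(zip(candidates, posterior.round(3))))
```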
SIGVerse: A Cloud-Based VR Platform for Research on Multimodal Human-Robot Interaction
2021
Frontiers in Robotics and AI
Research on Human-Robot Interaction (HRI) requires substantial consideration of the experimental design, as well as a significant amount of time to conduct subject experiments. ...
... interface for robot/avatar teleoperation. ...
One such challenge is the collection of a dataset for machine learning in HRI (Amershi et al., 2014), which is required to learn and model human activities. ...
doi:10.3389/frobt.2021.549360
pmid:34136534
pmcid:PMC8202404
fatcat:5nibhsqggzhbbg2kfxw7hnf4oy
Symbol Emergence in Robotics: A Survey
[article]
2015
arXiv
pre-print
Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment ...
Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. ...
Modeling and recognizing a target object, as well as modeling a scene and segmenting objects from that scene, are important abilities for a robot in a realistic environment. ...
arXiv:1509.08973v1
fatcat:yg6bscvy2fdpdhapltyonvhs2a
Deep Learning for Tactile Understanding From Visual and Haptic Data
[article]
2016
arXiv
pre-print
Robots which interact with the physical world will benefit from a fine-grained tactile understanding of objects and surfaces. ...
Our models take advantage of recent advances in deep neural networks by employing a unified approach to learning features for physical interaction and visual observations. ...
ACKNOWLEDGMENT: We would like to thank Jeff Donahue for advice and guidance during the initial stages of the experiments, as well as for useful discussions on deep models. ...
arXiv:1511.06065v2
fatcat:glnc3odxxvbzbnm2n7bbb4qo3i
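The general recipe described in this entry's snippet (separate encoders whose features are fused for a shared prediction task) can be sketched as follows, assuming PyTorch. Layer sizes, input shapes, and the 8-class output are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch: fuse a visual encoder and a haptic time-series encoder
# into a shared embedding for material/surface classification.
import torch
import torch.nn as nn

class VisuoHapticNet(nn.Module):
    def __init__(self, n_classes: int = 8):
        super().__init__()
        self.vision = nn.Sequential(             # 3x64x64 image -> 64-d
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64),
        )
        self.haptic = nn.Sequential(             # 6-channel force/vibration
            nn.Conv1d(6, 32, 7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, 7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 64),
        )
        self.head = nn.Linear(128, n_classes)    # classifier on fused features

    def forward(self, image, haptic):
        z = torch.cat([self.vision(image), self.haptic(haptic)], dim=-1)
        return self.head(z)

net = VisuoHapticNet()
logits = net(torch.randn(2, 3, 64, 64), torch.randn(2, 6, 200))
print(logits.shape)  # torch.Size([2, 8])
```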
Object Permanence Through Audio-Visual Representations
[article]
2021
arXiv
pre-print
In particular, we developed a multimodal neural network model, using a partial, observed bounce trajectory and the audio resulting from drop impact as its inputs, to predict the full bounce trajectory and ...
As robots perform manipulation tasks and interact with objects, it is probable that they accidentally drop objects that subsequently bounce out of their visual fields (e.g., due to an inadequate grasp ...
ACKNOWLEDGMENTS: We would like to thank Gopika Ajaykumar for proofreading this paper and the Johns Hopkins University for supporting this work. ...
arXiv:2010.09948v2
fatcat:mno64mjzifd65hpsfwmm3za5cq
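The input/output contract stated in the snippet (partial bounce trajectory plus impact audio in, full trajectory out) can be sketched as below. The encoder choices, feature dimensions, and 20-step prediction horizon are assumptions, not the authors' network.

```python
# Minimal sketch: regress the remaining bounce trajectory from a partial
# observed trajectory and an impact-audio feature vector.
import torch
import torch.nn as nn

class BouncePredictor(nn.Module):
    def __init__(self, audio_dim=128, hidden=64, future_steps=20):
        super().__init__()
        self.future_steps = future_steps
        self.traj_enc = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(2 * hidden, future_steps * 3)

    def forward(self, partial_traj, audio_feat):
        _, h = self.traj_enc(partial_traj)           # h: (1, B, hidden)
        fused = torch.cat([h[-1], self.audio_enc(audio_feat)], dim=-1)
        out = self.decoder(fused)                    # (B, future_steps * 3)
        return out.view(-1, self.future_steps, 3)    # predicted 3-D points

model = BouncePredictor()
pred = model(torch.randn(4, 15, 3), torch.randn(4, 128))
print(pred.shape)  # torch.Size([4, 20, 3])
```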
Multimodal Probabilistic Model-Based Planning for Human-Robot Interaction
[article]
2017
arXiv
pre-print
Our approach is to learn multimodal probability distributions over future human actions from a dataset of human-human exemplars and perform real-time robot policy construction in the resulting environment ...
This paper presents a method for constructing human-robot interaction policies in settings where multimodality, i.e., the possibility of multiple highly distinct futures, plays a critical role in decision ...
Our policy is validated on the same simulator for pairwise human-robot traffic weaving interactions. ... [The aim of this] paper is to devise a data-driven framework for HRI that leverages learned multimodal human action distributions ...
arXiv:1710.09483v1
fatcat:445sn4pvcffb7nfixbicnbglzu
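The core notion of multimodality here, several highly distinct possible futures, can be illustrated with a toy mixture over a human driver's modes during a traffic-weaving merge; the robot evaluates its actions against samples from that mixture. The modes, probabilities, and cost function below are illustrative stand-ins for what the paper learns from human-human data.

```python
# Minimal sketch: sample from a two-mode model of future human behavior
# and pick the robot action with lowest expected cost.
import numpy as np

rng = np.random.default_rng(0)

modes = ["yield", "accelerate"]            # distinct futures
mode_probs = np.array([0.6, 0.4])          # mixture weights (assumed, not learned)

def sample_human_speed(mode: str) -> float:
    mu, sigma = {"yield": (8.0, 1.0), "accelerate": (14.0, 1.5)}[mode]
    return rng.normal(mu, sigma)

def cost(robot_action: str, human_speed: float) -> float:
    """Toy cost: merging in front of a fast-moving human is dangerous."""
    if robot_action == "merge_now":
        return max(0.0, human_speed - 10.0) ** 2
    return 1.0                              # small fixed cost for waiting

expected = {}
for action in ["merge_now", "wait"]:
    samples = [cost(action, sample_human_speed(rng.choice(modes, p=mode_probs)))
               for _ in range(1000)]
    expected[action] = float(np.mean(samples))

print(expected, "->", min(expected, key=expected.get))
```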
Learning-based modeling of multimodal behaviors for humanlike robots
2014
Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI '14
The evaluation of this approach in a human-robot interaction study shows that this learning-based approach is comparable to conventional modeling approaches in enabling effective robot behaviors while ...
We discuss the implications of this approach for designing natural, effective multimodal robot behaviors. ...
Multimodal Behaviors in Robots Previous research in human-robot interaction has explored the development of mechanisms for achieving natural and effective multimodal behaviors for robots, such as the development ...
doi:10.1145/2559636.2559668
dblp:conf/hri/0001M14
fatcat:adh2amzv6vaazg7wi6i3nkatbu
Behavior and usability analysis for multimodal user interfaces
2021
Journal on Multimodal User Interfaces
Multimodal interfaces offer ever-changing tasks and challenges for designers to accommodate newer technologies, and as these technologies become more accessible, newer application scenarios emerge. ...
... skills, and this special issue is a reflection of that collective effort. ...
Ince et al. developed a drum-playing game for multimodal human-robot interaction using audio-visual cues. ...
doi:10.1007/s12193-021-00372-0
fatcat:pappj7oc7jfsxcy5mrqg3mnj2e
Reshaping Robot Trajectories Using Natural Language Commands: A Study of Multi-Modal Data Alignment Using Transformers
[article]
2022
arXiv
pre-print
We train the model using imitation learning over a dataset containing robot trajectories modified by language commands, and treat the trajectory generation process as a sequence prediction problem, analogously ...
In this work, we provide a flexible language-based interface for human-robot collaboration, which allows a user to reshape existing trajectories for an autonomous agent. ...
ACKNOWLEDGMENTS: AB gratefully acknowledges the support from TUM-MIRMI. ...
arXiv:2203.13411v1
fatcat:zy2dakxlfbb3tfciamk7zz6hza
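The sequence-prediction framing in this entry's snippet can be sketched with a small encoder-decoder transformer that conditions on language tokens plus the original waypoints. The vocabulary size, dimensions, and teacher-forced decoding below are illustrative assumptions, not the authors' model.

```python
# Minimal sketch: predict a reshaped trajectory as a sequence, conditioned
# on a language command and the original waypoints.
import torch
import torch.nn as nn

class TrajectoryReshaper(nn.Module):
    def __init__(self, vocab=1000, d_model=64, wp_dim=3):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)        # language tokens
        self.wp_in = nn.Linear(wp_dim, d_model)        # waypoint embedding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True,
        )
        self.wp_out = nn.Linear(d_model, wp_dim)       # modified waypoints

    def forward(self, tokens, orig_traj, prev_out):
        # Encoder sees [language ; original trajectory]; the decoder predicts
        # the reshaped trajectory (teacher-forced here, no causal mask for brevity).
        src = torch.cat([self.tok(tokens), self.wp_in(orig_traj)], dim=1)
        h = self.transformer(src, self.wp_in(prev_out))
        return self.wp_out(h)

model = TrajectoryReshaper()
tokens = torch.randint(0, 1000, (2, 6))    # e.g. "stay further from the box"
orig = torch.randn(2, 10, 3)
pred = model(tokens, orig, prev_out=orig)
print(pred.shape)  # torch.Size([2, 10, 3])
```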
Object Permanence Through Audio-Visual Representations
2021
IEEE Access
Our results contribute to enabling object permanence for robots and error recovery from object drops. ...
In particular, we developed a multimodal neural network model, using a partial, observed bounce trajectory and the audio resulting from drop impact as its inputs, to predict the full bounce trajectory and ...
ACKNOWLEDGMENTS: We would like to thank the Johns Hopkins University Institute for Assured Autonomy for supporting this work. ...
doi:10.1109/access.2021.3115082
fatcat:4y43k4l3vbgptcojoowlnbkryu
Showing results 1 — 15 out of 4,934 results