
The vernissage corpus: A conversational Human-Robot-Interaction dataset

Dinesh Babu Jayagopi, Samira Sheikhi, David Klotz, Johannes Wienke, Jean-Marc Odobez, Sebastian Wrede, Vasil Khalidov, Laurent Nguyen, Britta Wrede, Daniel Gatica-Perez
2013 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI)  
We introduce a new conversational Human-Robot-Interaction (HRI) dataset with a real behaving robot inducing interactive behavior with and between humans.  ...  Since perceiving nonverbal cues, beyond the spoken words, plays a major role in social interactions and in socially interactive robots, we have extensively annotated the dataset.  ...  Acknowledgment: This research was funded by the EU HUMAVIPS project.  ...
doi:10.1109/hri.2013.6483545 fatcat:66dbr62yhbfq7bfgjacevz6pb4

Given that, should I respond? Contextual addressee estimation in multi-party human-robot interactions

Dinesh Babu Jayagopi, Jean-Marc Odobez
2013 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI)  
For this study, we use 11 interactions with the humanoid robot NAO giving a quiz to two human participants.  ...  For every utterance from a human participant, the robot should know whether it was being addressed or not, so as to respond and behave accordingly.  ...  Acknowledgment: This research was funded by the EU HUMAVIPS project. The authors would like to thank Daniel Gatica-Perez for useful discussions.  ...
doi:10.1109/hri.2013.6483544 fatcat:sjii5jqnn5gfzjnrzc3cb3k6xy
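
The task in the entry above reduces to a per-utterance binary decision: was the robot addressed? A minimal sketch of that formulation, assuming scikit-learn and a hypothetical feature set (gaze-at-robot proportion, utterance duration, whether the robot spoke last); the features and values are illustrative, not the ones used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative per-utterance features (assumed, not from the paper):
# [fraction of the utterance the speaker gazes at the robot,
#  utterance duration in seconds,
#  1 if the robot spoke the previous turn else 0]
X_train = np.array([
    [0.9, 1.2, 1],   # gazing at robot right after it spoke -> addressed
    [0.1, 3.5, 0],   # gazing away, long side remark        -> not addressed
    [0.8, 0.8, 1],
    [0.2, 2.0, 0],
])
y_train = np.array([1, 0, 1, 0])  # 1 = robot is the addressee

clf = LogisticRegression().fit(X_train, y_train)

# For each new utterance, the robot decides whether to respond.
utterance = np.array([[0.85, 1.0, 1]])
if clf.predict(utterance)[0] == 1:
    print("Robot was addressed -> respond")
else:
    print("Robot was not addressed -> stay silent")
```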

Deep Learning Based Multi-modal Addressee Recognition in Visual Scenes with Utterances

Thao Le Minh, Nobuyuki Shimizu, Takashi Miyazaki, Koichi Shinoda
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
We also propose a multi-modal deep-learning-based model that takes different human cues, specifically eye gazes and transcripts of an utterance corpus, into account to predict the conversational addressee  ...  With the widespread use of intelligent systems, such as smart speakers, addressee recognition has become a concern in human-computer interaction, as more and more people expect such systems to understand  ...  Acknowledgments We thank the anonymous reviewers for their careful reading of our manuscript and their many insightful comments and suggestions.  ... 
doi:10.24963/ijcai.2018/214 dblp:conf/ijcai/MinhSMS18 fatcat:2f2wrlqnvfcd7mva3uqcb3pfk4
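
The model described in the entry above fuses gaze cues with an utterance transcript. A toy late-fusion version in PyTorch, with assumed dimensions, an assumed GRU text encoder, and a made-up three-way addressee label set; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class LateFusionAddressee(nn.Module):
    """Toy late-fusion model: one branch per modality, concatenated
    before a classifier head. All dimensions are illustrative."""
    def __init__(self, gaze_dim=6, vocab=1000, embed_dim=32, hidden=64,
                 n_addressees=3):
        super().__init__()
        self.gaze_net = nn.Sequential(nn.Linear(gaze_dim, hidden), nn.ReLU())
        self.embed = nn.Embedding(vocab, embed_dim)
        self.text_net = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_addressees)

    def forward(self, gaze, tokens):
        g = self.gaze_net(gaze)                    # (B, hidden)
        _, h = self.text_net(self.embed(tokens))   # h: (1, B, hidden)
        fused = torch.cat([g, h.squeeze(0)], dim=-1)
        return self.head(fused)                    # addressee logits

model = LateFusionAddressee()
gaze = torch.randn(2, 6)                  # batch of 2 gaze-feature vectors
tokens = torch.randint(0, 1000, (2, 12))  # token ids of each utterance
print(model(gaze, tokens).shape)          # torch.Size([2, 3])
```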

Multimodal Human-Human-Robot Interactions (MHHRI) Dataset for Studying Personality and Engagement

Oya Celiktutan, Efstratios Skordos, Hatice Gunes
2017 IEEE Transactions on Affective Computing  
In this paper we introduce a novel dataset, the Multimodal Human-Human-Robot-Interactions (MHHRI) dataset, with the aim of studying personality simultaneously in human-human interactions (HHI) and human-robot  ...  Index Terms: Multimodal interaction dataset, human-human interaction, human-robot interaction, personality analysis, engagement classification, benchmarking  ...  The Vernissage corpus was collected by Jayagopi et al. [25], comprising interactions of two participants with a humanoid robot.  ...
doi:10.1109/taffc.2017.2737019 fatcat:vpaeaqp2nvdrbkncl6qga2tv4q

Learning Robot Speech Models to Predict Speech Acts in HRI

Ankuj Arora, Humbert Fiorino, Damien Pellier, Sylvie Pesty
2018 Paladyn: Journal of Behavioral Robotics  
This paper proposes a technique to learn robot speech models from human-robot dialog exchanges.  ...  This motivates the long-term necessity of introducing behavioral autonomy in robots, so they can communicate with humans autonomously, without the need for "wizard" intervention.  ...  These dialogues are taken from the Vernissage corpus, a multimodal HRI dataset [4] with a total of 10 conversation instances between Nao and human participants.  ...
doi:10.1515/pjbr-2018-0015 fatcat:3crrxedixfdwrctopmnsbxnxqu
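
The entry above learns robot speech models from annotated dialog exchanges. As a much simpler stand-in for the paper's learning technique, the sketch below fits a first-order Markov model over hypothetical speech-act labels; both the labels and the Markov assumption are illustrative choices, not the paper's method.

```python
from collections import Counter, defaultdict

# Hypothetical speech-act sequences extracted from annotated
# human-robot dialogues (labels are illustrative).
dialogues = [
    ["greet", "greet", "ask_quiz", "answer", "feedback", "ask_quiz"],
    ["greet", "ask_quiz", "answer", "feedback", "farewell"],
]

# Count first-order transitions between consecutive speech acts.
transitions = defaultdict(Counter)
for acts in dialogues:
    for prev, nxt in zip(acts, acts[1:]):
        transitions[prev][nxt] += 1

def predict_next(act):
    """Most likely next speech act under the learned Markov model."""
    counts = transitions[act]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("answer"))  # -> 'feedback'
```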

Social Agents for Teamwork and Group Interactions (Dagstuhl Seminar 19411)

Elisabeth André, Ana Paiva, Julie Shah, Selma ŠAbanovic, Michael Wagner
2020 Dagstuhl Reports  
It summarises the three talks that were held during the seminar on three different perspectives: the impact of robots in human teamwork, mechanisms to support group interactions in virtual settings, and affect analysis in human-robot group settings.  ...  However, if the same robot were equipped to understand human conversation and avoid vacuuming when a conversation is taking place, it would then be considered social.  ...
doi:10.4230/dagrep.9.10.1 dblp:journals/dagstuhl-reports/AndrePSS19 fatcat:qs2mrcot7jh3jlwvmgldbrcva4

Demonstrating and Learning Multimodal Socio-communicative Behaviors for HRI: Building Interactive Models from Immersive Teleoperation Data

Gérard Bailly, Frédéric Elisei
2018 FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction (unpublished)
Collecting and modeling multimodal interactive data is thus a major issue for fostering AI for HRI.  ...  We then analyze the benefits and challenges raised by using immersive teleoperation for endowing humanoid robots with such skills.  ...  So the Vernissage corpus [Jayagopi et al., 2013] comprises multiple auditory, visual, and robotic system information channels from the Nao robot while interacting with two persons as an art guide in  ... 
doi:10.21437/ai-mhri.2018-10 fatcat:4dbp6fw5bjh6xfxnzdecuwbocm

Multiple-Gaze Geometry: Inferring Novel 3D Locations from Gazes Observed in Monocular Video [chapter]

Ernesto Brau, Jinyan Guan, Tanya Jeffries, Kobus Barnard
2018 Lecture Notes in Computer Science  
Conversely, knowing 3D locations of scene elements that draw visual attention, such as other people in the scene, can help infer gaze direction.  ...  As existing data sets do not provide the 3D locations of what people are looking at, we contribute a small data set that does.  ...  Another application of estimating VFoA is human-robot interaction scenarios, which involves both person-to-person and robot-to-person interactions [36, 47, 67] .  ... 
doi:10.1007/978-3-030-01225-0_38 fatcat:dg6tki5svbabbk4ocinburm6ge
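
The entry above infers novel 3D locations from gazes observed in video. One standard geometric core for that idea treats each gaze as a 3D ray and solves for the point minimizing the squared distance to all rays; the NumPy least-squares sketch below illustrates that idea only, not the paper's probabilistic model.

```python
import numpy as np

def triangulate_gazes(origins, directions):
    """Least-squares point closest to a set of 3D gaze rays.

    origins:    (N, 3) eye positions p_i
    directions: (N, 3) gaze directions d_i (normalized internally)
    Solves  sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)  # projects onto ray's normal plane
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Two observers on either side of the origin, both gazing at (0, 0, 2).
origins = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
target = np.array([0.0, 0.0, 2.0])
directions = target - origins
print(triangulate_gazes(origins, directions))  # ~ [0. 0. 2.]
```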

Dagstuhl Reports, Volume 9, Issue 10, October 2019, Complete Issue

2020
Secondly, the talk introduces a novel dataset we have collected, the Multimodal Human-Human-Robot-Interactions (MHHRI) dataset [2], acquired with the aim of studying personality simultaneously in human-human  ...  However, if the same robot were equipped to understand human conversation and avoid vacuuming when a conversation is taking place, it would then be considered social.  ...  The class of temporal networks forms a subclass of the tree-child networks. A network is tree-child if every non-leaf vertex has a child that is not a reticulation.  ...
doi:10.4230/dagrep.9.10 fatcat:4dvf4zjt4nhafhwb4ot23au3ua

The Attention-Hesitation Model. A Non-Intrusive Intervention Strategy for Incremental Smart Home Dialogue Management

Birte Richter
2021
Therefore, I investigate whether it is possible to use system hesitations, based on the attention of the human interaction partner, as a communicative act for dialogue coordination in HAI within a smart-home  ...  In the first part, I develop a model which allows the dialogue management to incorporate the human attention: the Attention-Hesitation Model (AHM).  ...  First, the dataset is not recorded in a real interaction. The recorded interaction itself simulated the position of the viewer/robot with a camera.  ... 
doi:10.4119/unibi/2959410 fatcat:j7i7zms6abaxnn4athsxyv7vo4
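
The thesis summarized above uses system hesitations, driven by the human's attention, to coordinate dialogue. A toy version of that control loop, assuming a boolean per-word attention signal (in the thesis this would come from sensing the interaction partner); the names and word-level granularity are illustrative assumptions.

```python
import time

def speak_with_hesitations(words, attention_signal, pause=0.5):
    """Toy incremental speech loop: utter the next word only while the
    user attends; otherwise hesitate and wait for attention to return.
    `attention_signal` is any callable returning True when the user
    attends (e.g. a gaze-tracking check in a real system)."""
    for word in words:
        while not attention_signal():
            print("...", end=" ", flush=True)  # hesitation as a communicative act
            time.sleep(pause)
        print(word, end=" ", flush=True)
    print()

# Simulated attention: the user looks away just before the third word.
ticks = iter([True, True, False, True, True, True])
speak_with_hesitations(["Please", "close", "the", "window"],
                       lambda: next(ticks))
# Output: Please close ... the window
```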