How people talk when teaching a robot

Elizabeth S. Kim, Dan Leyzberg, Katherine M. Tsui, Brian Scassellati
2009 Proceedings of the 4th ACM/IEEE international conference on Human robot interaction - HRI '09  
We examine affective vocalizations provided by human teachers to robotic learners. In unscripted one-on-one interactions, participants provided vocal input to a robotic dinosaur as the robot selected toy buildings to knock down. We find that (1) people vary their vocal input depending on the learner's performance history, (2) people do not wait until a robotic learner completes an action before they provide input, and (3) people naively and spontaneously use intensely affective vocalizations.
These findings suggest that modifications may be needed to traditional machine learning models to better fit observed human tendencies. Our observations of human behavior contradict the assumptions commonly made by machine learning algorithms (in particular, reinforcement learning) that the reward function in a social learning interaction is stationary and path-independent. We also propose an interaction taxonomy that describes three phases of a human teacher's vocalizations: direction, spoken before an action is taken; guidance, spoken as the learner communicates an intended action; and feedback, spoken in response to a completed action.
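To make the stationarity point concrete, here is a minimal sketch (not from the paper; all names and the scaling factor are illustrative assumptions) contrasting the standard reinforcement-learning reward model with a history-dependent one, in which a teacher's feedback intensity grows after a streak of failures, as the paper's observations suggest:

```python
def stationary_reward(action, correct_action):
    """Standard RL assumption: reward depends only on the current action."""
    return 1.0 if action == correct_action else -1.0

def history_dependent_reward(action, correct_action, history):
    """Hypothetical teacher model: feedback is scaled by recent performance,
    so the reward function is neither stationary nor path-independent."""
    recent_failures = sum(1 for ok in history[-3:] if not ok)
    base = 1.0 if action == correct_action else -1.0
    # Illustrative assumption: each recent failure amplifies affect by 50%.
    return base * (1.0 + 0.5 * recent_failures)

history = [False, False, False]  # three failed attempts in a row
print(stationary_reward("A", "A"))                     # always 1.0
print(history_dependent_reward("A", "A", history))     # 1.0 * (1 + 1.5) = 2.5
```

Under this model, the same correct action earns a different reward depending on the path taken to reach it, which is exactly the property that standard value-function updates assume away.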
doi:10.1145/1514095.1514102 dblp:conf/hri/KimLTS09