2,752 Hits in 14.4 sec

Guiding Interaction Behaviors for Multi-modal Grounded Language Learning

Jesse Thomason, Jivko Sinapov, Raymond Mooney
2017 Proceedings of the First Workshop on Language Grounding for Robotics  
Multi-modal grounded language learning connects language predicates to physical properties of objects in the world.  ...  Previous work has established that grounding in multi-modal space improves performance on object retrieval from human descriptions.  ...  Acknowledgments: We thank our anonymous reviewers for their time and insights.  ... 
doi:10.18653/v1/w17-2803 dblp:conf/acl/ThomasonSM17 fatcat:jiiw4ip23vbz5cdsbiq2kp352e

Embodied Songs: Insights Into the Nature of Cross-Modal Meaning-Making Within Sign Language Informed, Embodied Interpretations of Vocal Music

Vicky J. Fisher
2021 Frontiers in Psychology  
This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research.  ...  This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences.  ...  images of them in action, in the figures.  ... 
doi:10.3389/fpsyg.2021.624689 pmid:34744850 pmcid:PMC8569319 fatcat:syseshpzhvbydlnfa3hujo5jqm

Gaussian process decentralized data fusion meets transfer learning in large-scale distributed cooperative perception

Ruofei Ouyang, Bryan Kian Hsiang Low
2019 Autonomous Robots  
... Narahari. "Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions" (Jesse Thomason*, Jivko Sinapov, Raymond Mooney, Peter Stone). "Guiding Search in Continuous State-action Spaces  ...  Improved Description of Complex Scenes" (Ashwin Vijayakumar*, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, Dhruv Batra). "Diverse Exploration for Fast and Safe Policy Improvement"  ... 
doi:10.1007/s10514-018-09826-z fatcat:67yqhwmgozccxni56rxmuapjgm

SocialAI: Benchmarking Socio-Cognitive Abilities in Deep Reinforcement Learning Agents [article]

Grgur Kovač, Rémy Portelas, Katja Hofmann, Pierre-Yves Oudeyer
2021 arXiv   pre-print
Building embodied autonomous agents capable of participating in social interactions with humans is one of the main challenges in AI.  ...  To do this, we present SocialAI, a benchmark to assess the acquisition of social skills of DRL agents using multiple grid-world environments featuring other (scripted) social agents.  ...  All presented experiments were carried out using both A) the computing facilities MCIA (Mésocentre de Calcul Intensif Aquitain) of the Université de Bordeaux and of the Université de Pau et des Pays de  ... 
arXiv:2107.00956v3 fatcat:6jyi3eivtfctbl2vl66se2jy3q

Building Human-like Communicative Intelligence: A Grounded Perspective

Marina Dubova
2021 Cognitive Systems Research  
I then use this analysis to propose a list of concrete, implementable components for building "grounded" linguistic intelligence.  ...  I review results on 4E research lines in Cognitive Science to distinguish the main aspects of naturalistic learning conditions that play causal roles for human language development.  ...  Acknowledgements: I would like to thank Justin Wood and Robert Goldstone for their extremely helpful feedback at every stage of the preparation of this manuscript.  ... 
doi:10.1016/j.cogsys.2021.12.002 fatcat:c2dwpl43dba2dduhmbm4fg2ayy

Approaches for assessing communication in human-autonomy teams

Anthony L. Baker, Sean M. Fitzhugh, Lixiao Huang, Daniel E. Forster, Angelique Scharine, Catherine Neubauer, Glenn Lematta, Shawaiz Bhatti, Craig J. Johnson, Andrea Krausman, Eric Holder, Kristin E. Schaefer (+1 others)
2021 Human-Intelligent Systems Integration  
It is not possible to identify all approaches for all situations, though the following seem to generalize and support multi-size teams and a variety of military operations.  ...  Future research directions describe four critical areas for further study of communication in human-autonomy teams.  ...  Metcalfe for feedback on revisions to the manuscript.  ... 
doi:10.1007/s42454-021-00026-2 fatcat:br5vzqgt5fb7dataceijshlpcu

The curious robot - Structuring interactive robot learning

I. Lutkebohle, J. Peltason, L. Schillingmann, B. Wrede, S. Wachsmuth, C. Elbrechter, R. Haschke
2009 2009 IEEE International Conference on Robotics and Automation  
Thereby, we specifically target untrained users, who are supported by mixed-initiative interaction using verbal and non-verbal modalities.  ...  To improve such human-robot interaction, a system is presented that provides dialog structure and engages the human in an exploratory teaching scenario.  ...  Explorative behaviors based on multi-modal salience have recently been explored by Ruesch et al. to control the gaze of the iCub robot [13].  ... 
doi:10.1109/robot.2009.5152521 dblp:conf/icra/LutkebohlePSWWEH09 fatcat:jv44rla7vvghhdhyosb2iwmtnq

Homo inventans: The evolution of narrativity

Lynda D. McNeil
1996 Language & Communication  
I would like to replace the model of a discontinuous, unprecedented cultural expression with a view of underlying continuity from multi-modal prenarrative (nonhuman primate, hominid; Homo habilis), multi-modal  ...  At least by Homo habilis, hominid language was probably multi-modal, combining gestural, iconic, and some vocal modalities of expression.  ... 
doi:10.1016/s0271-5309(96)00025-0 fatcat:sxhzvfifebfe3egfg3syvvay2i

Pointing and Self-reference in French and French Sign Language

Aliyah Morgenstern, Stéphanie Caët, Fanny Limousin
2016 Open Linguistics  
The aim of this paper is to conduct an exploratory study and compare the development of pointing and its specific use as self-reference in French Sign Language (LSF) with the development of pointing  ...  In LSF, the signs used for personal reference have the same form as pointing gestures, which are present in children's communication system from the age of 10-11 months (Bates et al. 1977, Clark 1978).  ...  Acknowledgement: The authors would like to thank their anonymous reviewers for their extremely constructive comments, as well as the editors.  ... 
doi:10.1515/opli-2016-0003 fatcat:fucvujbfpfea5ckp3656uchjfe

Multimodal estimation and communication of latent semantic knowledge for robust execution of robot instructions

Jacob Arkin, Daehyung Park, Subhro Roy, Matthew R Walter, Nicholas Roy, Thomas M Howard, Rohan Paul
2020 The international journal of robotics research  
We posit the use of non-exteroceptive modalities including physical proprioception, factual descriptions, and domain knowledge as mechanisms for inferring semantic properties of objects.  ...  Finally, we propose an efficient framework that anticipates possible linguistic interactions and infers the associated groundings for the current world state, thereby bootstrapping both language understanding  ...  We thank Michael Noseworthy for valuable feedback on this manuscript.  ... 
doi:10.1177/0278364920917755 fatcat:u2w3o7h4svea5gdryo6bfe5xae

Effects of live and video simulation on clinical reasoning performance and reflection

Timothy J. Cleary, Alexis Battista, Abigail Konopasky, Divya Ramani, Steven J. Durning, Anthony R. Artino
2020 Advances in Simulation  
Additionally, the current study points to the potential advantages of video self-reflection following live scenarios while also shedding some light on the debate regarding whether video-guided reflection  ...  In recent years, researchers have recognized the need to examine the relative effectiveness of different simulation approaches and the experiences of physicians operating within such environments.  ...  of Defense, Department of the Navy, or the Uniformed Services University.  ... 
doi:10.1186/s41077-020-00133-1 pmid:32760598 pmcid:PMC7393892 fatcat:acqyppjqb5csnj6rs32tu3jxgy

Human Behavior Understanding for Robotics [chapter]

Albert Ali Salah, Javier Ruiz-del-Solar, Çetin Meriçli, Pierre-Yves Oudeyer
2012 Lecture Notes in Computer Science  
This paper discusses the scientific, technological and application challenges that arise from the mutual interaction of robotics and computational human behavior understanding.  ...  Human behavior is complex, but structured along individual and social lines.  ...  human multi-modal behavior.  ... 
doi:10.1007/978-3-642-34014-7_1 fatcat:i2fvsrkp4vfzhkzivzzjk2gwp4

Exploration Behaviors, Body Representations, and Simulation Processes for the Development of Cognition in Artificial Agents

Guido Schillaci, Verena V. Hafner, Bruno Lara
2016 Frontiers in Robotics and AI  
In order to guide our movements, our brain must hold an internal model of our body and constantly monitor its configuration state.  ...  Although a clear answer has still not been found for this question, several studies suggest that processes of mental simulation of action-perception loops are likely to be executed in our brain and are  ...  The above-mentioned Epigenetic Robotics Architecture (Morse et al. 2010) addressed body representations for grounding linguistic labels onto body postures and visual and auditory modalities.  ... 
doi:10.3389/frobt.2016.00039 fatcat:fd4dirm75feprjg3penleqrv5a

Reconciled with complexity in research on cognitive systems

Joanna Rączaszek-Leonardi
2016 Avant: Journal of Philosophical-Interdisciplinary Vanguard  
The causes of human behavior cannot be simple. Every move we make has a nested hierarchy of causes that affect its direction, timing and form.  ...  A brief overview of methods that are suitable for dealing with such interaction-dominant complex systems is presented and used as a background for describing a specific research program with the aim of  ...  For example, we analyzed gaze-at-face behaviors of mothers and infants.  ... 
doi:10.26913/70202016.0112.0007 fatcat:vrallrdwgfewziq2oyi7cploou

Conversational Recommendation: A Grand AI Challenge [article]

Dietmar Jannach, Li Chen
2022 arXiv   pre-print
..., to ask them for the weather forecast. However, when asked for recommendations, e.g., for a restaurant to go to, the limitations of such devices quickly become obvious.  ...  one of the next grand challenges of AI.  ...  (Table fragments: a catalog of conversational recommender systems over time, with columns System, Interaction Description, and Modality, first entry Wasabi Personal Shopper; Table 2, a catalog of user intents, with columns Intent, Description, and Example, first intent "Ask for Recommendation".)  ... 
arXiv:2203.09126v1 fatcat:xyev5lm5ujdi7byuzuaaa2vj5q
Showing results 1–15 out of 2,752 results