Special Issue on Symbol Grounding

Silvia Coradeschi, Amy Loutfi, Britta Wrede
Künstliche Intelligenz, 2013
How can we achieve artificial systems that are capable of understanding human language? This question has occupied the field of artificial intelligence for decades and has undergone several paradigm shifts: from rule-based approaches, which proposed sets of symbolic rules for processing language input and producing intelligent behavior, to the insight that symbolic rules are not sufficient to deal with new situations, let alone new symbols, that have not been encountered before. Rather, it has been proposed that the symbols of a language need to be grounded in perception in order to allow for generalisation to new situations. This reformulation of the problem has been termed "Symbol Grounding". It proposes that any system capable of understanding symbols needs to be embodied: the system must be able to perceive its environment and to produce actions within this environment that cause significant changes, and these percepts and actions need to be tied to symbols that allow it to abstract and thus to transfer knowledge. Research therefore focuses either on simulated embodied agents or on robots situated in a physical environment.

Given the complex nature of the challenge, a range of different strands of research has emerged, of which we have tried to capture the most relevant ones in this Special Issue. Focusing on the development of grounding capabilities in infants brings in new perspectives and has, for example, led to the insight that while the learning system has to be extremely adaptive to the environment, the social environment itself also adapts to the learner. The contribution by Paul Vogt & J. Douglas Mastin thus discusses which aspects of such interactions should be modelled in artificial systems, with a special focus on data "from the wild". This social component is taken up in the question of social symbol grounding, which builds on the idea that through the communication of symbols within a community a much higher level of knowledge can be achieved by each individual. In this Special Issue we present several approaches that examine in detail how such communication processes can be realised on robots, not only acting in real physical environments but also interacting with humans in order to achieve shared representations. Peltason et al.
focus on the question of grounding in the application of human-robot interaction and show, through detailed analysis, which mechanisms enable interlocutors to achieve common ground, and which, if missing, may lead to failure. De Kruijff takes up this question and argues that, due to the inherent asymmetry between humans and robots, mutual understanding can only be achieved by taking the differences explicitly into account in the logical representations, and by allowing for these differences through the notion of "judgment" rather than the notion of "truth" currently used in proposition-based logical representation and reasoning approaches. This approach is taken up in the contribution by Buschmeier & Kopp, who argue that symbol grounding in dialogue is a joint construction process, which they model with a Bayesian network capable of taking uncertainties into account. An approach that considers language acquisition in general artificial systems is presented by Michael Spranger. This paper examines the emergence of language: how conceptualization strategies can not only be represented, but also how they influence the development of lexical systems and evolve over time. A number of papers consider symbol grounding for objects in a robotic context. In Heintz et al. an anchoring framework …
doi:10.1007/s13218-013-0250-7