Enactive artificial intelligence: Investigating the systemic organization of life and mind

Tom Froese, Tom Ziemke
Artificial Intelligence, 2009
The embodied and situated approach to artificial intelligence (AI) has matured and become a viable alternative to traditional computationalist approaches with respect to the practical goal of building artificial agents that can behave in a robust and flexible manner under changing real-world conditions. Nevertheless, some concerns have recently been raised with regard to the sufficiency of current embodied AI for advancing our scientific understanding of intentional agency. While from an engineering or computer science perspective this limitation might not be relevant, it is of course highly relevant for AI researchers striving to build accurate models of natural cognition. We argue that the biological foundations of enactive cognitive science can provide the conceptual tools that are needed to diagnose more clearly the shortcomings of current embodied AI. In particular, taking an enactive perspective points to the need for AI to take seriously the organismic roots of autonomous agency and sense-making. We identify two necessary systemic requirements, namely constitutive autonomy and adaptivity, which lead us to introduce two design principles of enactive AI. It is argued that the development of such enactive AI poses a significant challenge to current methodologies. However, it also provides a promising way of eventually overcoming the current limitations of embodied AI, especially in terms of providing fuller models of natural embodied cognition. Finally, some practical implications and examples of the two design principles of enactive AI are also discussed.

'classical' problems such as those pointed out in Searle's [108] famous "Chinese Room Argument", the notorious "frame problem" (e.g. [84, 33]), Harnad's [59] formulation of the "symbol grounding problem", or even the extensive Heideggerian criticisms developed by Dreyfus [42, 43, 45]. Although there are of course significant differences between these criticisms, what they all generally agree on is that purely computational systems, as traditionally conceived by these authors, cannot account for the property of intentional agency. And without this property there is no sense in saying that these systems know what they are doing; they do not have any understanding of their situation [63].
Thus, to put it slightly differently, all these arguments are variations on the problem of how it is possible to design an artificial system in such a manner that relevant features of the world actually show up as significant from the perspective of that system itself, rather than merely from the perspective of the human designer or observer. Given that embodied AI systems typically have robotic bodies and, to a large extent, appear to interact meaningfully with the world through their sensors and motors, one might think that the above problems have either disappeared or at least become solvable. Indeed, it has been argued that some dynamical form of such embodied AI is all we need to explain how it is that systems can behave in ways that are adaptively sensitive to context-dependent relevance [139]. Nevertheless, there have been some warning signs that something crucial might still be amiss. In fact, for the researcher interested in the philosophy of AI and the above criticisms, this should not come as a surprise. While Harnad's [58] position is that of a robotic functionalism, such that for him robotic embodiment is a crucial part of the solution to the symbol grounding problem, this is not the case for Searle. Searle's [108] original formulation of the Chinese Room Argument already anticipated what he called the "robot reply", envisioning essentially what we call embodied AI today (i.e. computer programs controlling robots and thus interacting with the real world), but he rejected that reply as not making any substantial difference to his argument. Let us shift attention, though, from these 'classic' philosophical arguments to a quick overview of more recent discussions among practitioners of embodied AI, which will be elaborated in more detail in the following sections.
Already a decade ago, Brooks [22] remarked that, in spite of all the progress that the field of embodied AI has made since its inception in the late 1980s, actual biological systems behave in a considerably more robust, flexible, and generally more life-like manner than any artificial system produced so far. On the basis of this 'failure' of embodied AI to properly imitate even insect-level intelligence, he suggested that perhaps we have all missed some general truth about living systems. Moreover, even though some progress has certainly been made since Brooks' rather skeptical appraisal, the general worry that some crucial feature is still lacking in our models of living systems nevertheless remains (e.g. [23]). This general worry about the inadequacy of current embodied AI for advancing our scientific understanding of natural cognition has been expressed in a variety of ways in the recent literature. Di Paolo [36], for example, has argued that, even though today's embodied robots are in many respects a significant improvement over traditional approaches, an analysis of the organismic mode of being reveals that "something fundamental is still missing" to solve the problem of meaning in AI. Similarly, one of us [143] has raised the question of whether robots really are embodied in the first place, and has elsewhere argued [141] that embodied approaches have provided AI with physical grounding (e.g. [20]), but nevertheless have not managed to fully resolve the grounding problem. Furthermore, Moreno and Etxeberria [90] provide biological considerations which make them skeptical as to whether existing methodologies are sufficient for creating artificial systems with natural agency. Indeed, concerns have even been raised, by ourselves and others, about whether current embodied AI systems can be properly characterized as autonomous in the sense that living beings are (e.g. [110, 146, 147, 52, 61]).
Finally, the Heideggerian philosopher Dreyfus, whose early criticisms of AI (cf. above) have had a significant impact on the development of modern embodied AI (or "Heideggerian AI", as he calls it), has recently referred to these new approaches as a "failure" [46]. For example, he claims that embodied/Heideggerian AI still falls short of satisfactorily addressing the grounding problem because it cannot fully account for the constitution of a meaningful perspective for an agent. Part of the problem, we believe, is that while the embodied approach has mostly focused on establishing itself as a viable alternative to the traditional computationalist paradigm [2], relatively little effort has been made to connect with theories outside the field of AI, such as theoretical biology or phenomenological philosophy, in order to address issues of natural autonomy and the embodiment of living systems [144]. However, as the above brief overview of recent discussions indicates, awareness appears to be slowly growing in the field of embodied AI that something essential might still be lacking in current models if the field is to fulfill its own ambitions to avoid, solve or overcome the problems traditionally associated with computationalist AI (see footnote 2), and thereby provide better models of natural cognition. We argue that a promising answer to these problems might be gained by drawing inspiration from recent developments in enactive cognitive science (e.g. [116-118, 120, 121, 113, 95]). The enactive paradigm originally emerged as part of embodied cognitive science in the early 1990s with the publication of the book The Embodied Mind [127], which has strongly influenced a large number of embodied cognition theorists (e.g. [26]). More recent work has more explicitly placed biological autonomy and lived subjectivity at the heart of enactive cognitive science (cf. [118, 41]).
Of particular interest in the current context is its incorporation of the organismic roots of autonomous agency and sense-making into its theoretical framework (e.g. [136, 38]).

2 We will use the term 'computationalist AI' broadly to denote any kind of AI which subscribes to the main tenets of the Representationalist or Computational Theory of Mind (cf. [60]), especially the metaphors 'Cognition Is Computation' and 'Perception Is Representation' (e.g. mostly GOFAI and symbolic AI, but also much sub-symbolic AI and some embodied approaches).

Foundations of embodied AI

What is embodied AI? One helpful way to address this question is by means of a kind of field guide, such as the one recently published in this journal by Anderson [1]. Another useful approach is to review the main design principles employed by practitioners of embodied AI to engineer their autonomous robots. The latter is the approach adopted here, because it provides the background against which we will propose some additional principles for the development of enactive AI later in this paper (Section 4). Fortunately, there has already been some effort within the field of embodied AI to make these design principles explicit (e.g. [99]; see [103, 101, 100] for a more elaborate discussion). Here we will briefly recapitulate a recent overview of these principles by Pfeifer, Iida and Bongard [101]. It is worth emphasizing that Pfeifer's attempt at their explication has its beginnings in the early 1990s and, more importantly, that the principles have been derived from over two decades of practical AI research since the 1980s [99]. The design principles are summarized in Table 1. They are divided into two subcategories, namely (i) the "design procedure principles", which are concerned with the general philosophy of the approach, and (ii) the "agent design principles", which deal more directly with the actual methodology of designing autonomous agents [102].
The first of the design procedure principles (P-1) makes it explicit that the use of the synthetic methodology by embodied AI should primarily be viewed as a scientific rather than an engineering endeavor, though of course these two goals do not mutually exclude each other [100, 56]. It is therefore important to realize that we are mostly concerned with the explanatory power afforded by the various AI approaches reviewed in this paper. In other words, the main question we want to address is how we should build AI systems such that they can help us to better understand the natural phenomena of life and mind. Of course, since living beings have many properties that are also desirable for artificial systems and which are still lacking in current implementations [12], any advances in this respect are also important for more practical considerations, such as how to design more robust and flexible AI systems. It is certainly the case that the "understanding by building" principle has also been adopted by many practitioners within the traditional paradigm since the inception of AI in the 1950s, though it can be said that today's computationalist AI is generally more focused

3 Varela, Thompson and Rosch [127] in fact referred to Brooks' work on subsumption architectures and behavior-based robotics (e.g. [20, 21]) as an "example of what we are calling enactive AI" (p. 212) and a "fully enactive approach to AI" (p. 212). Nowadays, however, many researchers would probably not refer to this work as "fully enactive", due to the lack of constitutive autonomy, adaptivity and other reasons discussed in this paper.

4 It might be worth noting that Pfeifer's principles here serve as representative of the principles and the state of the art of the embodied AI approach, as formulated by one of the leading researchers (and his co-workers). Hence, the extensions required for enactive AI formulated in this paper should not be interpreted as criticisms of Pfeifer's principles (or other work) specifically, but rather as further developments of the general embodied approach to AI that they are taken to be representative of.
doi:10.1016/j.artint.2008.12.001