Different Genetic Algorithms and the Evolution of Specialization: A Study with Groups of Simulated Neural Robots
Tomassino Ferrauto, Domenico Parisi, Gabriele Di Stefano, Gianluca Baldassarre
2013
Artificial Life
Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviours, often based on role taking and specialisation. These behaviours are relevant not only for the biologist but also for the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved by
genetic algorithms. These algorithms and controllers are well suited to autonomously find solutions to decentralized collective robotic tasks based on principles of self-organization. The paper first presents a taxonomy of role-taking and specialisation mechanisms related to evolved neural-network controllers. It then introduces two cooperation tasks which can be accomplished by either role taking or specialisation, and uses these tasks to compare four different genetic algorithms, evaluating their capacity to evolve a behavioural strategy suited to the task demands. Interestingly, only one of the four algorithms, which appears to have more biological plausibility, is capable of evolving role taking or specialisation when they are needed. The results are relevant for both collective robotics and biology, as they can provide useful hints on the different processes that can lead to the emergence of specialisation in robots and organisms.

Cooperative behaviours are an important topic of autonomous robotics which has received increasing attention in the last two decades ([80, 43, 16, 41, 36, 7, 23, 51]; see [21, 26, 75] for some reviews and taxonomies of multi-robot systems and the tasks that can be tackled with them). This research concerns multi-robot systems that tackle tasks that cannot be solved by single robots [47, 89, 38, 73, 34, 11], or that can be solved more efficiently by multiple robots [36, 71], and it is often inspired by, or tries to capture, the mechanisms underlying the highly efficient behaviours of social insects [43, 16, 42, 44, 2] and other animals acting in groups [66, 18, 81, 70]. The coordination of these organisms is based on interesting principles of self-organisation [20, 27] which, if understood in depth and formalised [10, 13, 76], can often be translated into useful coordination principles [66, 43, 19] and robust collective robot controllers [36, 8, 24, 12].

This work focuses on multi-robot systems with distributed controllers [41, 36, 88], that is, on robot groups which do not rely upon "leader robots" [80, 6] or centralized control mechanisms [89, 22, 71, 15]. Rather, they base coordination on peer-to-peer interactions and self-organizing principles. Multi-robot systems with distributed control are of particular interest for autonomous robotics because, compared to systems with centralized control, they are usually more robust with respect to failures of single robots, require little or no explicit communication [16, 19, 36, 64, 9], and allow the use of robots with simpler sensors and actuators [17, 29, 24].

The controllers presented here are evolved through genetic algorithms [56, 81, 64, 24, 70, 74]. As suggested by the framework of Evolutionary Robotics [55], multi-robot systems can greatly benefit from being automatically designed with evolutionary algorithms, as it is sometimes very difficult to directly design their controllers due to the indirect and complex causal chains that link the behaviour of the single robots to the behaviour of the whole group, which is the ultimate target of the design (see [14] for a comparison and exploitation of synergies between evolutionary techniques and direct-design approaches).
In this regard, evolutionary techniques have the potential of developing controllers that exploit the self-organizing properties of multi-robot systems, as they generate variants of controllers on the basis of random mutations and select a posteriori the best of such variants according to the quality of the behaviour exhibited by the robots in their interaction with the environment [54].

The robotic controllers used in this work are neural networks. The rationale of this choice is that,
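As an illustration only, and not the authors' implementation, the evolutionary loop described above (controllers encoded as weight vectors, variants generated by random mutations, a posteriori selection based on the behaviour they produce) can be sketched roughly as follows in Python. The population size, mutation strength, and the evaluate_group fitness function are hypothetical placeholders standing in for the evaluation of the simulated two-robot group.

# Minimal sketch of the evolutionary loop described above (illustrative only,
# not the authors' algorithm): fixed-topology neural controllers are encoded
# as weight vectors, variants are produced by Gaussian mutation, and the best
# variants are selected a posteriori according to the behaviour they generate.
# evaluate_group is a hypothetical stand-in for running the two simulated
# robots and scoring their collective behaviour.

import numpy as np

rng = np.random.default_rng(0)

N_WEIGHTS = 40       # size of the genotype (connection weights), assumed
POP_SIZE = 20        # genotypes per generation, assumed
N_PARENTS = 5        # truncation selection: best genotypes kept as parents
MUT_STD = 0.1        # standard deviation of Gaussian weight mutations
GENERATIONS = 100

def evaluate_group(weights: np.ndarray) -> float:
    """Placeholder fitness: in the paper this would be the score obtained by
    the group of simulated robots driven by networks with these weights."""
    return -float(np.sum(weights ** 2))   # dummy objective for illustration

def mutate(weights: np.ndarray) -> np.ndarray:
    """Create a variant genotype by adding Gaussian noise to the weights."""
    return weights + rng.normal(0.0, MUT_STD, size=weights.shape)

# Initial population of random weight vectors.
population = [rng.normal(0.0, 1.0, N_WEIGHTS) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # A-posteriori selection: rank genotypes by the behaviour they generate.
    scored = sorted(population, key=evaluate_group, reverse=True)
    parents = scored[:N_PARENTS]
    # Next generation: keep the parents (elitism), fill the rest with mutants.
    population = parents + [mutate(parents[i % N_PARENTS])
                            for i in range(POP_SIZE - N_PARENTS)]

best = max(population, key=evaluate_group)
print("best fitness:", evaluate_group(best))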
doi:10.1162/artl_a_00106
pmid:23514239
fatcat:532g24pc4fdnjactqdk4hm3jhy