Control and interaction strategies for self-reconfigurable modular robots
Last but not least, I would like to thank my family for their support throughout my degree. I especially thank my mom, dad, and sister. My hard-working parents have sacrificed a lot for my sister and me and have given us unconditional love and care. I would not have made it this far without them.

Lausanne, 21st of July 2014
Stéphane Bonardi

Abstract

Research on Self-Reconfigurable Modular Robots (SRMRs) has steadily increased over the past decade. Their ability to change shape dynamically
to adapt autonomously to their environment, combined with their inherent versatility and their robustness through redundancy, makes them potentially well suited for a large variety of tasks. For example, the Roombots SRMR from the Biorobotics Laboratory (EPFL, Switzerland) has been developed with the goal of creating assistive and adaptive furniture able to locomote and self-adapt in everyday-life environments. This thesis contributes to the field of SRMRs by designing algorithms and devising strategies that address three major problems in the domain: self-reconfiguration, locomotion, and user interaction.

Despite significant efforts in the domain of self-reconfiguration (SR), current approaches often rely on high-level abstractions of the problem and on perfect theoretical models of the active units, neglecting the issues of bending and connection misalignment; this makes the transfer of such methods to hardware platforms difficult, if not impossible. Moreover, the constructed structures can only be comprised of active modules (often of a single type) rather than a mix of active and passive units (i.e. units that possess no actuation capability and are only equipped with passive connectors compatible with the active units), which tends to reduce the range of shapes that can be built using SRMRs. Taking these limitations into account, we first propose incremental modifications of existing techniques to address the SR problem. We then extend the state of the art with a novel hierarchical approach that allows the integration of fully passive elements and computes hardware-friendly movements that respect torque limitations. We also explore different ways of characterizing and compensating for hardware imperfections such as the bending effects observed in many materials and the alignment error during the connection and disconnection phases.
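To make the lattice abstraction discussed above concrete, the sketch below runs a toy breadth-first reconfiguration search on a 2D grid: one module relocates per step to a free cell adjacent to the rest of the structure, and the structure must stay connected throughout. This is an illustrative simplification of the idealized model, not the hierarchical planner developed in the thesis; the 2D setting and all names are our own assumptions.

```python
from collections import deque

NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def connected(cells):
    """Flood fill: True if the set of lattice cells is face-connected."""
    cells = set(cells)
    start = next(iter(cells))
    seen, stack = {start}, [start]
    while stack:
        x, y = stack.pop()
        for dx, dy in NBRS:
            n = (x + dx, y + dy)
            if n in cells and n not in seen:
                seen.add(n)
                stack.append(n)
    return seen == cells

def reconfigure(start, goal):
    """BFS over configurations: one module moves per step to a free cell
    adjacent to the remaining structure, which must stay connected.
    Returns the minimal number of moves, or None if unreachable."""
    start, goal = frozenset(start), frozenset(goal)
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        conf, steps = queue.popleft()
        if conf == goal:
            return steps
        for m in conf:
            rest = conf - {m}
            if not connected(rest):
                continue  # removing this module would split the structure
            targets = {(x + dx, y + dy) for (x, y) in rest for dx, dy in NBRS}
            for t in targets - conf:
                new = rest | {t}
                if new not in seen:
                    seen.add(new)
                    queue.append((new, steps + 1))
    return None
```

For instance, turning a horizontal line of three modules into a vertical one sharing a corner cell takes two moves under this model. The exponential state space of such searches is one reason the thesis turns to a hierarchical decomposition instead.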
The ability of SRMRs to rapidly change their morphology makes them a suitable tool to study locomotion learning across various topologies. Methods based on gait tables have been widely used to manage predefined changes of topology, but they cannot deal with unguided self-reconfiguration, where the final structure into which the set of robots reconfigures is unknown beforehand. We propose a new algorithm that relies on the detection of bio-inspired patterns in the structure, combined with the use of symmetries, to create a reduced control network that converges quickly towards a reasonably efficient gait in terms of internal collisions and forward speed. The Central Pattern Generator (CPG) network used for locomotion control offers additional robustness and smooth transitions between gaits. We demonstrate that our approach significantly outperforms a fully open control network, by a factor of up to 10 in the first 30 iterations, making it particularly well suited for time-critical tasks in unknown environments.

With the steady integration of robots into everyday-life environments, the question of interaction strategies and modalities becomes a central one. SRMRs bring the additional challenges of an evolving morphology, both on-grid and off-grid, and a lack of anthropomorphic features. Classical interfaces often confine the user to a fixed device such as a PC to design a desired shape or to control a group of robots. To allow non-expert users to exploit the full potential of SRMRs, we introduce more natural ways of interacting with a group of SRMRs by abstracting away the complexity of SR and locomotion learning through high-level interaction strategies.
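For background on the CPG networks mentioned above, such controllers are commonly formulated as networks of coupled phase oscillators. The sketch below implements one Euler integration step of such a network (Kuramoto-style phase coupling with amplitude-scaled cosine outputs driving the joints); the parameterization is a generic illustration, not the specific controller of the thesis.

```python
import math

def cpg_step(phases, amplitudes, freqs, couplings, dt=0.01):
    """One Euler step of a coupled phase-oscillator CPG.

    phases, amplitudes, freqs: per-oscillator state and parameters.
    couplings: list of (i, j, weight, phase_bias) entries, pulling
    oscillator i towards lagging oscillator j by phase_bias.
    Returns updated phases and the joint setpoints they encode.
    """
    n = len(phases)
    # intrinsic advance at each oscillator's own frequency (Hz)
    dphi = [2 * math.pi * freqs[i] for i in range(n)]
    # diffusive phase coupling enforces the desired phase relations
    for (i, j, w, bias) in couplings:
        dphi[i] += w * math.sin(phases[j] - phases[i] - bias)
    new_phases = [phases[i] + dt * dphi[i] for i in range(n)]
    outputs = [amplitudes[i] * math.cos(new_phases[i]) for i in range(n)]
    return new_phases, outputs
```

Two bidirectionally coupled oscillators with zero phase bias synchronize from any initial offset, which is the mechanism behind the smooth gait transitions noted above: changing the biases or amplitudes online moves the network continuously to a new limit cycle.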
We develop both a tablet-based interface, in which the user arranges virtual structures made of SRMR modules in an augmented-reality representation of a room, and a device-free interface based on the principle of embodied interaction, in which the user is tracked by external depth sensors and uses pointing gestures to control groups of robots. Additional feedback is given to the user through the lighting of the grid setup and of the modules themselves.
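As an illustration of how a pointing gesture from tracked body positions can be mapped to a target on an instrumented floor grid, the toy function below intersects the head-to-hand ray with the floor plane and returns the grid cell it hits. The geometry and the cell_size parameter are assumptions made for this sketch, not the tracking pipeline of the thesis.

```python
def pointed_cell(head, hand, cell_size=0.5):
    """Intersect the head->hand ray with the floor plane z = 0 and
    return the (col, row) grid cell it hits, or None if the user is
    not pointing downwards. Positions are (x, y, z) in metres."""
    hx, hy, hz = head
    dx, dy, dz = hand[0] - hx, hand[1] - hy, hand[2] - hz
    if dz >= 0:
        return None          # ray never descends to the floor
    t = -hz / dz             # ray parameter where z reaches 0
    x, y = hx + t * dx, hy + t * dy
    return (int(x // cell_size), int(y // cell_size))
```

For example, a head at (0, 0, 1.7) m and a hand at (0.3, 0, 1.4) m point at the floor 1.7 m ahead, i.e. cell (3, 0) with 0.5 m cells; raising the hand above the head yields no target.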