VISION-BASED HUMANOID NAVIGATION USING SELF-SUPERVISED OBSTACLE DETECTION

DANIEL MAIER, CYRILL STACHNISS, MAREN BENNEWITZ
2013 International Journal of Humanoid Robotics  
In this article, we present an efficient approach to obstacle detection for humanoid robots based on monocular images and sparse laser data. We particularly consider collision-free navigation with the Nao humanoid, currently the most popular small-size humanoid robot. Our approach first analyzes the scene around the robot by acquiring data from a laser range finder installed in the head. Then, it uses the knowledge about obstacles identified in the laser data to train visual classifiers based on
color and texture information in a self-supervised way. While the robot is walking, it applies the learned classifiers to the camera images to decide which areas are traversable. As we show in the experiments, our technique allows for safe and efficient humanoid navigation in real-world environments, even for robots equipped with low-end hardware such as the Nao, which has not been achieved before. Furthermore, we illustrate that our system is generally applicable and can also support traversability estimation using other combinations of camera and depth data, e.g., from a Kinect-like sensor.

Due to the placement of the individual sensors on the robot, the area in front of the robot's feet may not be observable while walking. This raises the question of whether the robot can safely continue walking without colliding with unanticipated objects. This is crucial, as collisions easily lead to a fall. In this paper, we address these three challenges by developing an effective approach to obstacle detection that combines monocular images and sparse laser data. As we show in the experiments, this enables the robot to navigate more efficiently through the environment.

Since its release, Aldebaran's Nao robot has quickly become the most common humanoid robot platform. However, this robot is particularly affected by the aforementioned limitations and problems due to its small size and the installed low-end hardware. This might be a reason why, to date, there is no general obstacle detection system available that allows reliable, collision-free motion for this type of robot outside the restricted domain of robot soccer. Accordingly, the useful applications that can be realized with the Nao system are limited. In this work, we tackle this problem and develop an obstacle detection system that relies solely on the robot's onboard sensors. Our approach is designed to work on a standard Nao robot with the optional laser head (see left image of Fig. 1), without the need for further modifications. Our system is, however, not limited to the Nao platform but can be used on any robot that provides camera and range data.

To detect obstacles, our approach interprets sparse 3D laser data obtained from the Hokuyo laser range finder installed in the robot's head. Given this placement of the laser device, obstacles close to the robot's feet cannot be observed while walking since they lie outside the sensor's field of view. Hence, the robot needs to stop occasionally and adjust its body pose before performing a 3D laser sweep by tilting its head to obtain distance information about nearby objects. This procedure robustly detects obstacles from the proximity data but is time-consuming and thus leads to inefficient navigation. To overcome this problem, we present a technique to train visual obstacle detectors from sparse laser data in order to interpret images from the monocular camera installed in the robot's head. Our approach projects obstacles detected in the range scans into the camera image and learns classifiers that consider color and texture.
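To make the described pipeline concrete, the following sketch outlines one way such self-supervised training and traversability estimation could look. It assumes a pinhole camera with known intrinsic matrix K, laser points already transformed into the camera frame and labeled as floor or obstacle by the laser-based scene analysis, and it substitutes a simple nearest-centroid classifier over mean color plus a crude texture cue for the classifiers used in the paper; all function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: the nearest-centroid classifier and all names/parameters
# are stand-ins, not taken from the paper.

def project_points(points_cam, K):
    """Project 3D laser points (given in the camera frame) onto the image
    plane with a pinhole model; K is the 3x3 intrinsic matrix."""
    in_front = points_cam[:, 2] > 0.1            # ignore points behind the camera
    p = points_cam[in_front]
    uv_h = K @ p.T                               # 3 x N homogeneous pixel coords
    uv = (uv_h[:2] / uv_h[2]).T                  # N x 2 pixel coords
    return uv.astype(int), in_front

def patch_features(image, u, v, half=8):
    """Mean color plus a crude texture cue (intensity std) of the patch at (u, v)."""
    h, w = image.shape[:2]
    patch = image[max(0, v - half):min(h, v + half),
                  max(0, u - half):min(w, u + half)].astype(np.float32)
    mean_color = patch.reshape(-1, 3).mean(axis=0) / 255.0
    texture = patch.mean(axis=2).std() / 255.0
    return np.concatenate([mean_color, [texture]])

class NearestCentroidClassifier:
    """Minimal stand-in for the visual classifier: one centroid per class."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return self.classes_[d.argmin(axis=1)]

def train_from_laser(image, laser_points_cam, laser_labels, K):
    """Self-supervised training: laser points labeled floor (0) / obstacle (1)
    are projected into the image and used to label training patches."""
    uv, in_front = project_points(laser_points_cam, K)
    labels = laser_labels[in_front]
    h, w = image.shape[:2]
    X, y = [], []
    for (u, v), lab in zip(uv, labels):
        if 0 <= u < w and 0 <= v < h:
            X.append(patch_features(image, u, v))
            y.append(lab)
    return NearestCentroidClassifier().fit(np.array(X), np.array(y))

def traversability_grid(image, clf, stride=16):
    """Classify a regular grid of patches; True means the cell looks traversable."""
    h, w = image.shape[:2]
    us = np.arange(stride // 2, w, stride)
    vs = np.arange(stride // 2, h, stride)
    feats = np.array([patch_features(image, u, v) for v in vs for u in us])
    return (clf.predict(feats) == 0).reshape(len(vs), len(us))
```

In such a setup, the robot could retrain the classifier whenever a fresh laser sweep becomes available and evaluate traversability_grid on incoming camera frames while walking, rather than stopping for a full 3D sweep before every motion.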
doi:10.1142/s0219843613500163