Low-level global features for vision-based localizations

Sven Eberhardt, Christoph Zetzsche
2013 Deutsche Jahrestagung für Künstliche Intelligenz  
Vision-based self-localization is the ability to derive one's own location from visual input alone, without knowledge of a previous position or idiothetic information. It is often assumed that the visual mechanisms and invariance properties used for object recognition will also be helpful for localization. Here we show that this is neither logically reasonable nor empirically supported. We argue that the desirable invariance and generalization properties differ substantially between the two tasks. Application of several biologically inspired algorithms to various test sets reveals that simple, globally pooled features outperform the complex vision models used for object recognition when tested on localization. Such basic global image statistics should thus be considered as valuable priors for self-localization, both in vision research and in robot applications.
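The abstract does not include code, and the authors' biologically inspired models are not reproduced here. As an illustration of what "globally pooled features" means in practice, the following is a minimal Python sketch: a descriptor built from whole-image statistics (a joint RGB histogram plus gradient-magnitude moments) matched by nearest neighbour. The function names (global_features, localize) and the specific statistics are illustrative assumptions, not the paper's method.

```python
import numpy as np

def global_features(image, bins=8):
    """Globally pooled low-level statistics: a joint RGB histogram plus
    mean/std of gradient magnitude, with all spatial layout discarded.
    (Illustrative choice of statistics, not the paper's feature set.)"""
    px = np.clip(image.reshape(-1, 3), 0.0, 1.0)
    hist, _ = np.histogramdd(px, bins=bins, range=[(0.0, 1.0)] * 3)
    hist = hist.ravel() / hist.sum()
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return np.concatenate([hist, [mag.mean(), mag.std()]])

def localize(query, database):
    """Return the index of the database view whose pooled descriptor
    is nearest to that of the query image."""
    q = global_features(query)
    dists = [np.linalg.norm(q - global_features(img)) for img in database]
    return int(np.argmin(dists))

# Toy usage: float images in [0, 1] with shape (H, W, 3).
rng = np.random.default_rng(0)
views = [rng.random((64, 64, 3)) for _ in range(5)]
noisy = np.clip(views[2] + 0.05 * rng.standard_normal((64, 64, 3)), 0, 1)
print(localize(noisy, views))  # expected: 2
```

The relevant property is that pooling over the whole image discards spatial configuration, which is consistent with the paper's point that localization benefits from different invariances than the configural processing used for object recognition.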