Learning Transformation Invariant Representations with Weak Supervision

Benjamin Coors, Alexandru Condurache, Alfred Mertins, Andreas Geiger
2018 Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP)
Deep convolutional neural networks are the current state-of-the-art solution to many computer vision tasks. However, their ability to handle large global and local image transformations is limited. Consequently, extensive data augmentation is often utilized to incorporate prior knowledge about desired invariances to geometric transformations such as rotations or scale changes. In this work, we combine data augmentation with an unsupervised loss which enforces similarity between the predictions of augmented copies of an input sample. Our loss acts as an effective regularizer which facilitates the learning of transformation invariant representations. We investigate the effectiveness of the proposed similarity loss on rotated MNIST and the German Traffic Sign Recognition Benchmark (GTSRB) in the context of different classification models including ladder networks. Our experiments demonstrate improvements with respect to the standard data augmentation approach for supervised and semi-supervised learning tasks, in particular in the presence of little annotated data. In addition, we analyze the performance of the proposed approach with respect to its hyperparameters, including the strength of the regularization as well as the layer where representation similarity is enforced.

• We present a detailed investigation on the weighting and placement of the loss.
• We show improved performance in supervised and semi-supervised learning on rotated MNIST and GTSRB when little labeled data is available.
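The core idea — an unsupervised similarity term added to the supervised objective — can be sketched as follows. This is a minimal illustrative implementation in NumPy, not the authors' code: the function names, the mean-squared-error form of the similarity term, and the weighting hyperparameter `lam` are assumptions for exposition (the paper also studies enforcing similarity at intermediate layers, which is omitted here).

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def similarity_loss(logits_a, logits_b):
    """Unsupervised term: penalize disagreement between the predictions
    for two differently augmented copies of the same input."""
    return np.mean((softmax(logits_a) - softmax(logits_b)) ** 2)

def total_loss(logits_clean, labels, logits_aug_a, logits_aug_b, lam=1.0):
    """Supervised cross-entropy plus the weighted similarity regularizer.
    `lam` (assumed name) controls the regularization strength; the
    similarity term needs no labels, so it also applies to unlabeled data."""
    p = softmax(logits_clean)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return ce + lam * similarity_loss(logits_aug_a, logits_aug_b)
```

Because `similarity_loss` requires only network outputs, it can be evaluated on unlabeled samples, which is what makes the approach applicable to the semi-supervised setting described above.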
doi:10.5220/0006549000640072 dblp:conf/visapp/CoorsCMG18