HyperNets and their application to learning spatial transformations [article]

Alexey Potapov, Oleg Shcherbakov, Innokentii Zhdanov, Sergey Rodionov, Nikolai Skorobogatko
2018 arXiv pre-print
In this paper we propose a conceptual framework for higher-order artificial neural networks. The idea of higher-order networks arises naturally when a model is required to learn some group of transformations, every element of which is well-approximated by a traditional feedforward network. Thus the group as a whole can be represented as a hyper network. A typical example of such a group is the group of spatial transformations. We show that the proposed framework, which we call HyperNets, is able to deal with at least two basic spatial transformations of images: rotation and affine transformation. We show that HyperNets are able not only to generalize rotation and affine transformation, but also to compensate for the rotation of images, bringing them into canonical form.
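For concreteness, the core idea can be sketched as a network whose weights are produced by a generator conditioned on the transformation parameter. The snippet below is a minimal illustrative sketch, assuming PyTorch, a single rotation-angle input, and a two-layer fully connected target network acting on flattened images; the HyperNet class, layer sizes, and names are assumptions made for illustration, not the authors' architecture.

# Minimal sketch (assumptions: PyTorch; a hypernetwork that maps a rotation
# angle to the weights of a small fully connected target network acting on
# flattened images; layer sizes are illustrative, not taken from the paper).
import torch
import torch.nn as nn


class HyperNet(nn.Module):
    """Generates the weights of a target network from a transformation parameter."""

    def __init__(self, in_dim=784, hidden=64, out_dim=784):
        super().__init__()
        self.in_dim, self.hidden, self.out_dim = in_dim, hidden, out_dim
        n_weights = in_dim * hidden + hidden + hidden * out_dim + out_dim
        # Maps the transformation parameter (e.g. a rotation angle) to all
        # weights and biases of the two-layer target network.
        self.generator = nn.Sequential(
            nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, n_weights)
        )

    def forward(self, angle, image):
        # angle: (batch, 1); image: (batch, in_dim), flattened
        params = self.generator(angle)
        i = 0
        w1 = params[:, i:i + self.in_dim * self.hidden].view(-1, self.hidden, self.in_dim)
        i += self.in_dim * self.hidden
        b1 = params[:, i:i + self.hidden]
        i += self.hidden
        w2 = params[:, i:i + self.hidden * self.out_dim].view(-1, self.out_dim, self.hidden)
        i += self.hidden * self.out_dim
        b2 = params[:, i:]
        # Apply the generated target network to the image.
        h = torch.relu(torch.bmm(w1, image.unsqueeze(-1)).squeeze(-1) + b1)
        return torch.bmm(w2, h.unsqueeze(-1)).squeeze(-1) + b2


# Usage: predict a transformed (e.g. rotated) image from the original image
# and the transformation parameter.
net = HyperNet()
imgs = torch.rand(8, 784)
angles = torch.rand(8, 1) * 6.28318
rotated_pred = net(angles, imgs)  # (8, 784)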
arXiv:1807.09226v1