A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Mix and match networks: cross-modal alignment for zero-pair image-to-image translation
[article] 2020, arXiv pre-print
This paper addresses the problem of inferring unseen cross-modal image-to-image translations between multiple modalities. We assume that only some of the pairwise translations have been seen (i.e. trained) and infer the remaining unseen translations (where training pairs are not available). We propose mix and match networks, an approach where multiple encoders and decoders are aligned in such a way that the desired translation can be obtained by simply cascading the source encoder and the target decoder.
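A minimal sketch of the cascading idea described in the abstract, not the authors' implementation: per-modality encoders and decoders share a latent space, and an unseen translation is obtained by pairing any source encoder with any target decoder. Module architectures, modality names, and shapes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical per-modality encoder/decoder mapping to/from a shared latent space.
class Encoder(nn.Module):
    def __init__(self, in_channels, latent_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_channels, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, out_channels, latent_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_channels, 4, stride=2, padding=1),
        )

    def forward(self, z):
        return self.net(z)

# One encoder and one decoder per modality; only some encoder-decoder pairs are
# trained jointly, but alignment in the shared latent space allows any source
# encoder to be cascaded with any target decoder at test time.
encoders = {"rgb": Encoder(3), "depth": Encoder(1), "segmentation": Encoder(1)}
decoders = {"rgb": Decoder(3), "depth": Decoder(1), "segmentation": Decoder(1)}

def translate(x, source, target):
    """Zero-pair translation: encode with the source modality, decode with the target."""
    z = encoders[source](x)      # shared latent representation
    return decoders[target](z)   # unseen pairing obtained by simple cascading

# Example: depth -> segmentation, even if that pair was never trained jointly.
depth_image = torch.randn(1, 1, 64, 64)
seg_prediction = translate(depth_image, "depth", "segmentation")
```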
arXiv:1903.04294v2
fatcat:m4cecpfwbnhwrlx5rsngblxhke