A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
[article]
2022
arXiv
pre-print
From the intuitive notion of disentanglement, the image variations corresponding to different factors should be distinct from each other, and a disentangled representation should reflect those variations along separate dimensions. To discover the factors and learn a disentangled representation, previous methods typically add an extra regularization term when learning to generate realistic images. However, this term usually results in a trade-off between disentanglement and generation.
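The regularization trade-off the abstract alludes to can be illustrated with the β-VAE objective, a well-known method of this kind (β-VAE is used here only as a representative example of an "extra regularization term"; it is not the approach proposed in this paper):

```latex
% beta-VAE objective: reconstruction term plus a weighted KL regularizer.
% Setting \beta > 1 pressures the latents z toward the factorized prior,
% encouraging disentanglement, but it also penalizes reconstruction
% fidelity -- the disentanglement/generation trade-off.
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \beta \, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```

With β = 1 this reduces to the standard VAE evidence lower bound; larger β trades generation quality for more disentangled latents.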
arXiv:2102.10543v2
fatcat:o5733u4egfhsjjocoj2gerayw4