The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement [article]

William Peebles, John Peebles, Jun-Yan Zhu, Alexei Efros, Antonio Torralba
2020 · arXiv preprint
Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson's estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN's latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.
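To illustrate the idea, the following is a minimal NumPy sketch of one way such a stochastic estimate can be computed: Rademacher probe vectors and a central second-order finite difference approximate vᵀHv, and the variance across probes measures off-diagonal Hessian energy. The function names, the number of probes `k`, the step size `eps`, and the final reduction are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def hessian_penalty(G, z, k=4, eps=0.1, rng=None):
    """Stochastic estimate of the off-diagonal Hessian energy of G at z.

    G   : callable mapping a latent vector to a scalar or array output
    z   : latent vector (1-D ndarray)
    k   : number of Rademacher probe vectors (assumed hyperparameter)
    eps : finite-difference step size (assumed hyperparameter)
    """
    rng = np.random.default_rng() if rng is None else rng
    probes = []
    for _ in range(k):
        # Rademacher probe: entries drawn uniformly from {-1, +1}
        v = rng.choice([-1.0, 1.0], size=z.shape)
        # Central second-order difference approximates v^T H v
        # (exact for quadratic G, since higher-order terms vanish)
        second_diff = (G(z + eps * v) - 2.0 * G(z) + G(z - eps * v)) / eps**2
        probes.append(second_diff)
    # Variance over probes is zero iff v^T H v is constant across probes,
    # which holds when H is diagonal; reduce over output elements.
    return float(np.max(np.var(np.stack(probes), axis=0)))
```

For a separable function such as G(z) = z₀² + z₁², the Hessian is diagonal and the estimate is (numerically) zero; a cross term like z₀·z₁ yields a nonzero penalty whenever the probes disagree in sign product.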
arXiv:2008.10599v1 fatcat:odaoj42xjzggrpstpzn4xr2n4a