Unsupervised learning with neural latent variable models
[article]
Charlie Nash, University of Edinburgh, Chris Williams, Vittorio Ferrari
2021
Latent variable models assume the existence of unobserved factors that are responsible for generating observed data. Deep latent variable models that make use of neural components are effective at modelling and learning representations of data. In this thesis we present specialised deep latent variable models for a range of complex data domains, address challenges associated with the presence of missing data, and develop tools for the analysis of the representations learned by neural networks.
First we present the shape variational autoencoder (ShapeVAE), a deep latent variable model of part-structured 3D objects. Given an input collection of part-segmented objects with dense point correspondences, the ShapeVAE is capable of synthesizing novel, realistic shapes, and by performing conditional inference it can impute missing parts or surface normals. In addition, by generating both points and surface normals, our model enables us to use powerful surface-reconstruction methods for mesh synthesis. We provide a quantitative evaluation of the ShapeVAE on shape-completion and test-set log-likelihood tasks and demonstrate that the model performs favourably against strong baselines. We demonstrate qualitatively that the ShapeVAE produces plausible shape samples, and that it captures a semantically meaningful shape embedding. In addition, we show that the ShapeVAE facilitates mesh reconstruction by sampling consistent surface normals.

Latent variable models can be used to probabilistically "fill in" missing data entries. The variational autoencoder architecture (Kingma and Welling, 2014; Rezende et al., 2014) includes a "recognition" or "encoder" network that infers the latent variables given the data variables. However, it is not clear how to handle missing data variables in these networks. The factor analysis (FA) model is a basic autoencoder, with linear encoder and decoder networks. We show how to calculate exactly the latent posterior distribution for the FA model in the presence of missing data, and note that this solution [...]
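To make the FA missing-data result concrete, here is a minimal sketch of the exact latent posterior computation, assuming the standard factor analysis model x = Wz + mu + eps with z ~ N(0, I) and diagonal noise eps ~ N(0, diag(psi)). The function name and argument layout are illustrative, not taken from the thesis; conditioning on only the observed entries amounts to dropping the corresponding rows of W before the usual Gaussian posterior update.

```python
import numpy as np

def fa_posterior_missing(x, mask, W, mu, psi):
    """Exact latent posterior p(z | x_obs) for a factor analysis model.

    Model: x = W z + mu + eps, with z ~ N(0, I_k) and eps ~ N(0, diag(psi)).
    x    : (d,) data vector; entries where mask is False are ignored
    mask : (d,) boolean array, True where x is observed
    W    : (d, k) factor loading matrix
    mu   : (d,) mean vector
    psi  : (d,) diagonal noise variances
    Returns the posterior mean (k,) and covariance (k, k).
    """
    W_o = W[mask]                            # loading rows for observed dims
    r = (x[mask] - mu[mask]) / psi[mask]     # Psi_o^{-1} (x_o - mu_o)
    A = W_o.T / psi[mask]                    # W_o^T Psi_o^{-1} (column-wise scaling)
    k = W.shape[1]
    cov = np.linalg.inv(np.eye(k) + A @ W_o) # (I + W_o^T Psi_o^{-1} W_o)^{-1}
    mean = cov @ (W_o.T @ r)
    return mean, cov

# Hypothetical usage: missing entries marked with NaN.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 2))
mu = np.zeros(5)
psi = np.ones(5)
x = np.array([0.3, 1.2, np.nan, -0.5, np.nan])
mask = ~np.isnan(x)
m, S = fa_posterior_missing(x, mask, W, mu, psi)
```

Because the posterior conditions only on observed dimensions, no imputation of the missing entries is needed; this is the exact counterpart of what a VAE recognition network must approximate when inputs are incomplete.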
doi:10.7488/era/760
fatcat:dkbu2h2e3fhdtb3e7kwdlntfke