Learning Generalized Transformation Equivariant Representations via Autoencoding Transformations [article]

Guo-Jun Qi, Liheng Zhang, Xiao Wang
2019 arXiv   pre-print
For this purpose, we present both deterministic AutoEncoding Transformations (AET) and probabilistic AutoEncoding Variational Transformations (AVT) models to learn visual representations from generic groups  ...  Transformation Equivariant Representations (TERs) aim to capture the intrinsic visual structures that equivary to various transformations by expanding the notion of translation equivariance underlying  ...  Unsupervised Learning of Transformation Equivariant Representations: The focus of this paper is on the principle of autoencoding transformations and its application to learn the transformation equivariant  ... 
arXiv:1906.08628v3 fatcat:ramv5y4rrzed5i3qqbkfc362ye
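The AET snippet above describes a concrete pretext task: encode an image and its transformed copy, then regress the transformation from the two codes. A minimal PyTorch sketch of that idea follows; the module names, the small convolutional encoder, and the choice of a 6-parameter affine transformation are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the deterministic AET idea: predict the transformation
# parameters from the representations of the original and transformed images.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
    def forward(self, x):
        return self.net(x)

class ParamDecoder(nn.Module):
    """Predicts transformation parameters from the (original, transformed) pair."""
    def __init__(self, dim=128, n_params=6):  # e.g., a 2x3 affine matrix
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_params))
    def forward(self, z, z_t):
        return self.net(torch.cat([z, z_t], dim=1))

def aet_loss(encoder, decoder, x, x_t, true_params):
    # x_t is x under a sampled transformation with parameters true_params.
    z, z_t = encoder(x), encoder(x_t)
    return nn.functional.mse_loss(decoder(z, z_t), true_params)
```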

Category-Learning with Context-Augmented Autoencoder [article]

Denis Kuzminykh, Laida Kushnareva, Timofey Grigoryev, Alexander Zatolokin
2020 arXiv   pre-print
We also notice that, though a naive data augmentation technique can be very useful for supervised learning problems, autoencoders typically fail to generalize transformations from data augmentations.  ...  We train a Variational Autoencoder in such a way that it makes the transformation outcome predictable by an auxiliary network in terms of the hidden representation.  ...  By using properties of Group Equivariant Convolutional Networks [4], the autoencoder with a specialized architecture can factor its internal representation space by the group of transformations used by  ... 
arXiv:2010.05007v1 fatcat:wq3jdw2hdnf5rhypq53cvwss2y
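The training idea in the snippet, making the transformation outcome predictable in latent space by an auxiliary network, can be hedged into a few lines. `AuxNet`, the one-hot transformation encoding, and the MSE consistency term are assumptions for illustration; the paper's exact architecture and loss weighting may differ.

```python
# Sketch: an auxiliary net maps (latent code, transformation label) to the
# latent code of the transformed input; its error is added to the VAE loss.
import torch
import torch.nn as nn

class AuxNet(nn.Module):
    def __init__(self, latent_dim=32, n_transforms=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_transforms, 64), nn.ReLU(),
            nn.Linear(64, latent_dim))
    def forward(self, z, t_onehot):
        return self.net(torch.cat([z, t_onehot], dim=1))

def consistency_loss(aux, z, z_transformed, t_onehot):
    # Penalize failure to predict the transformed input's latent code.
    return nn.functional.mse_loss(aux(z, t_onehot), z_transformed)
```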

Sparse Unsupervised Capsules Generalize Better [article]

David Rawlinson, Abdelrahman Ahmed, Gideon Kowadlo
2018 arXiv   pre-print
We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule  ...  Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsule networks.  ...  This surprising result is believed to be due to the capsule network being inherently able to describe affine-transformed digits via its equivariant representation.  ... 
arXiv:1804.06094v1 fatcat:w67dhg477rbazaf5oqcrpmctam
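One plausible reading of "unsupervised sparsening of latent capsule layer activity" is a top-k mask over capsule activations before reconstruction. The sketch below implements that reading; the top-k-by-norm rule is an assumption, not necessarily the paper's exact scheme.

```python
# Keep the k most active capsules (by vector norm), zero out the rest,
# and reconstruct from the masked capsules instead of a supervised mask.
import torch

def sparsen_capsules(caps: torch.Tensor, k: int) -> torch.Tensor:
    """caps: (batch, n_capsules, capsule_dim). Returns a masked copy."""
    norms = caps.norm(dim=-1)                       # (batch, n_capsules)
    topk = norms.topk(k, dim=1).indices             # indices of active capsules
    mask = torch.zeros_like(norms).scatter_(1, topk, 1.0)
    return caps * mask.unsqueeze(-1)
```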

Affine Variational Autoencoders: An Efficient Approach for Improving Generalization and Robustness to Distribution Shift [article]

Rene Bidart, Alexander Wong
2019 arXiv   pre-print
In this study, we propose the Affine Variational Autoencoder (AVAE), a variant of the Variational Autoencoder (VAE) designed to improve robustness by overcoming the inability of VAEs to generalize to distributional  ...  In addition, we introduce a training procedure to create an efficient model by learning a subset of the training distribution and using the AVAE to improve generalization and robustness to distributional  ...  Generalization to Affine Transforms: There have been many attempts both to learn representations that are robust to distributional shifts under a set of transformations and to increase interpretability  ... 
arXiv:1905.05300v1 fatcat:cfogazghcjfatcrbyapbfbx4we
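The AVAE snippet suggests searching over affine transformations of the input so that a trained VAE scores it well. A simplified sketch, restricted to rotations and a grid search, is below; `vae_elbo` is a stand-in for any function returning a per-example ELBO, and the search strategy is an assumption (the model may instead optimize the parameters directly).

```python
# Rotate the input until the (frozen) VAE's ELBO improves, i.e., until the
# transformed input lands closer to the training distribution.
import torch
import torch.nn.functional as F

def apply_rotation(x: torch.Tensor, angle: torch.Tensor) -> torch.Tensor:
    """Rotate a batch of images by `angle` radians via an affine grid."""
    cos, sin = torch.cos(angle), torch.sin(angle)
    theta = torch.stack([
        torch.stack([cos, -sin, torch.zeros_like(angle)], dim=-1),
        torch.stack([sin,  cos, torch.zeros_like(angle)], dim=-1)], dim=1)
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    return F.grid_sample(x, grid, align_corners=False)

def best_rotation(vae_elbo, x, n_angles=16):
    """Pick, per example, the rotation that maximizes the ELBO."""
    angles = torch.linspace(0, 2 * torch.pi, n_angles)
    scores = torch.stack([vae_elbo(apply_rotation(x, a.expand(x.size(0))))
                          for a in angles])         # (n_angles, batch)
    return angles[scores.argmax(dim=0)]
```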

Disentangling Autoencoders (DAE) [article]

Jaehoon Cha, Jeyan Thiyagalingam
2022 arXiv   pre-print
We believe that this model opens a new direction for disentanglement learning based on autoencoders without regularizers.  ...  The proposed model is compared to seven state-of-the-art generative models based on autoencoders and evaluated on five supervised disentanglement metrics.  ...  In terms of transformations, assume that these transformations are represented by a group G of symmetries acting on W via an action •: G × W → W.  ... 
arXiv:2202.09926v2 fatcat:3dbmqrb52vfmnlxqww6odzeali
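For readers skimming past the snippet's action •: G × W → W, the standard definitions it leans on, the group-action axioms and the equivariance property these models target, are:

```latex
% Group action axioms and the equivariance property targeted by these models.
\begin{align*}
  e \bullet w &= w,
  \qquad g \bullet (h \bullet w) = (gh) \bullet w
  && \text{for all } g, h \in G,\ w \in W, \\
  f(g \bullet w) &= g \star f(w)
  && \text{(equivariance; } \star \text{ is the action on the latent space).}
\end{align*}
```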

Self-Supervised Graph Representation Learning via Topology Transformations [article]

Xiang Gao, Wei Hu, Guo-Jun Qi
2021 arXiv   pre-print
We present Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data, to enable the wide applicability of Graph  ...  Then, we self-train a representation encoder to learn node representations by reconstructing the topology transformations from the feature representations of the original and transformed graphs.  ...  as transformation equivariant representation learning.  ... 
arXiv:2105.11689v2 fatcat:g5knitch3vhm7j56la73wx6kaa
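The pretext task in the snippet, reconstructing topology transformations from representations of the original and transformed graphs, can be sketched as follows. `gnn_encode` and `edge_decoder` are placeholders for any graph encoder and pairwise decoder; flipping random adjacency entries is one simple choice of topology transformation.

```python
# Perturb a few edges, encode both graphs with a shared encoder, and train
# a decoder to predict, per node pair, whether that edge was flipped.
import torch

def flip_random_edges(adj: torch.Tensor, n_flips: int):
    """adj: (n, n) 0/1 adjacency. Returns perturbed adjacency and flip mask."""
    n = adj.size(0)
    idx = torch.randint(0, n, (n_flips, 2))
    flips = torch.zeros_like(adj)
    flips[idx[:, 0], idx[:, 1]] = 1.0
    return (adj + flips) % 2, flips        # XOR with the flip mask

def transformation_loss(gnn_encode, edge_decoder, x, adj):
    adj_t, flips = flip_random_edges(adj, n_flips=8)
    h, h_t = gnn_encode(x, adj), gnn_encode(x, adj_t)   # (n, d) each
    logits = edge_decoder(h, h_t)                        # (n, n) flip logits
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, flips)
```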

Learning with Capsules: A Survey [article]

Fabio De Sousa Ribeiro, Kevin Duarte, Miles Everett, Georgios Leontidis, Mubarak Shah
2022 arXiv   pre-print
...  context of representation learning.  ...  Capsule networks were proposed as an alternative approach to Convolutional Neural Networks (CNNs) for learning object-centric representations, which can be leveraged for improved generalization and sample  ...  Recently, a capsule autoencoder architecture has been proposed to explicitly learn robust motion representations [81].  ... 
arXiv:2206.02664v1 fatcat:auiy6oo5tbfghkppfyxysjiyty

Geometric Deep Learning on Molecular Representations [article]

Kenneth Atz, Francesca Grisoni, Gisbert Schneider
2021 arXiv   pre-print
GDL bears particular promise in molecular modeling applications, in which various molecular representations with different symmetry properties and levels of abstraction exist.  ...  Emphasis is placed on the relevance of the learned molecular features and their complementarity to well-established molecular descriptors.  ...  Other deep learning approaches have relied on string-based representations for de novo design, e.g., conditional generative adversarial networks [160-162] and variational autoencoders [163,  ... 
arXiv:2107.12375v4 fatcat:sgxlqdxiavbinly4s3zthysxbq

E(n) Equivariant Graph Neural Networks [article]

Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
2022 arXiv   pre-print
We demonstrate the effectiveness of our method on dynamical systems modelling, representation learning in graph autoencoders and predicting molecular properties.  ...  This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations, called E(n)-Equivariant Graph Neural Networks (EGNNs).  ...  Method (Graph Autoencoder): A Graph Autoencoder can learn unsupervised representations of graphs in a continuous latent space (Kipf & Welling, 2016b; Simonovsky & Komodakis, 2018).  ... 
arXiv:2102.09844v3 fatcat:fmes7k4fdrhjlfci7bcnq2pjha
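For reference, the EGNN layer update from the paper takes roughly the following form (up to notation), where φ_e, φ_x, φ_h are learned MLPs, a_ij optional edge attributes, and C a normalizing constant:

```latex
% EGNN layer l -> l+1 (Satorras et al.), up to notation.
\begin{align*}
  m_{ij}    &= \phi_e\!\left(h_i^{l},\, h_j^{l},\, \lVert x_i^{l} - x_j^{l} \rVert^2,\, a_{ij}\right) \\
  x_i^{l+1} &= x_i^{l} + C \sum_{j \neq i} \left(x_i^{l} - x_j^{l}\right) \phi_x(m_{ij}) \\
  h_i^{l+1} &= \phi_h\!\Big(h_i^{l},\, \sum_{j \neq i} m_{ij}\Big)
\end{align*}
```

Because coordinates enter only through squared distances and relative differences, the update commutes with rotations, reflections and translations of the x_i, which is the E(n) equivariance claimed.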

Equivariant Representation Learning via Class-Pose Decomposition [article]

Giovanni Luca Marchetti, Gustaf Tegnér, Anastasiia Varava, Danica Kragic
2022 arXiv   pre-print
We introduce a general method for learning representations that are equivariant to symmetries of data.  ...  Results show that our representations capture the geometry of data and outperform other equivariant representation learning frameworks.  ...  In this work we introduce a general framework for equivariant representation learning.  ... 
arXiv:2207.03116v2 fatcat:xgnngniayfgllowroco5hdj4aq
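In symbols, the class-pose decomposition described above amounts to an encoder that factors into an invariant part and an equivariant part; the following compact statement is a paraphrase of the snippet, not a quotation of the paper's formalism:

```latex
% Class-pose factorization: c is invariant, p is equivariant.
\[
  f = (c, p) : X \to C \times G,
  \qquad
  f(g \cdot x) = \big(\, c(x),\; g \cdot p(x) \,\big).
\]
```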

Small Data Challenges in Big Data Era: A Survey of Recent Progress on Unsupervised and Semi-Supervised Methods [article]

Guo-Jun Qi, Jiebo Luo
2021 arXiv   pre-print
We will review the principles of learning transformation-equivariant, disentangled, self-supervised and semi-supervised representations, all of which underpin the foundation of recent progress.  ...  We will also provide a broader outlook on future directions to unify transformation and instance equivariances for representation learning, connect unsupervised and semi-supervised augmentations, and explore  ...  It learns a special case of transformation-equivariant representations, as the learned representation ought to encode information about the applied rotations by equivarying with them.  ... 
arXiv:1903.11260v2 fatcat:hjya3ojzmfh7nnldhqkdx6o37a

Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning [article]

Kartik Ahuja, Jason Hartford, Yoshua Bengio
2021 arXiv   pre-print
A key goal of unsupervised representation learning is "inverting" a data generating process to recover its latent properties.  ...  We demonstrate the power of this mechanism-based perspective by showing that we can leverage our results to generalize existing identifiable representation learning results.  ...  Introduction: Modern unsupervised representation learning techniques can generate images of our world with intricate detail (e.g.  ... 
arXiv:2110.15796v1 fatcat:ebm5ntlulvherngfrufwqyy7z4

Self-supervised Wide Baseline Visual Servoing via 3D Equivariance [article]

Jinwook Huh, Jungseok Hong, Suveer Garg, Hyun Soo Park, Volkan Isler
2022 arXiv   pre-print
We learn a coherent visual representation by leveraging a geometric property called 3D equivariance: the representation is transformed in a predictable way as a function of the 3D transformation.  ...  With the learned model, the relative transformation can be inferred simply by following the gradient in the learned space and used as feedback for closed-loop visual servoing.  ...  With 3D equivariance, the visual representation and its transformation are jointly learned via a Siamese network made of a feature extractor and a feature transformer.  ... 
arXiv:2209.05432v1 fatcat:invlqkinajbt3ntxvhpqz3tm5a
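The Siamese training signal implied by the snippet can be hedged into one loss term: features of one view, pushed through a feature transformer conditioned on the known relative 3D transform, should match features of the other view. `extract` and `feat_transform` are placeholders for the paper's two subnetworks, and the MSE matching term is an assumption.

```python
# Shared-weight Siamese pass over two views related by a known 3D transform.
import torch

def equivariance_loss(extract, feat_transform, view_a, view_b, g_ab):
    f_a, f_b = extract(view_a), extract(view_b)   # shared-weight encoder
    # Carry view-a features across the relative transform g_ab and ask
    # them to land on view-b features.
    return torch.nn.functional.mse_loss(feat_transform(f_a, g_ab), f_b)
```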

3DLinker: An E(3) Equivariant Variational Autoencoder for Molecular Linker Design [article]

Yinan Huang, Xingang Peng, Jianzhu Ma, Muhan Zhang
2022 arXiv   pre-print
...  graph variational autoencoder.  ...  To address these problems, we propose a conditional generative model, named 3DLinker, which is able to predict anchor atoms and jointly generate linker graphs and their 3D structures based on an E(3) equivariant  ...  Finally, since the generative model is based on the variational autoencoder (VAE) framework (Kingma & Welling, 2013), it can be used as an unsupervised representation learning method whose latent representations  ... 
arXiv:2205.07309v1 fatcat:ofrkitukljf3tng5q4nlhcaxy4

ManifoldNet: A Deep Network Framework for Manifold-valued Data [article]

Rudrasis Chakraborty, Jose Bouza, Jonathan Manton, Baba C. Vemuri
2018 arXiv   pre-print
...  weights make up the convolution mask to be learned.  ...  Deep neural networks have become the main workhorse for many tasks involving learning from data in a variety of applications in Science and Engineering.  ...  In the deep learning community this has become a field in its own right, known as representation learning or feature learning [3].  ... 
arXiv:1809.06211v3 fatcat:fubxpsktp5c3pb7fgeqif6bhmm
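A natural manifold analogue of convolution, and the one we believe the snippet's learned "convolution mask" weights parameterize, is a weighted Fréchet mean. A sketch on the unit sphere follows; the fixed-point iteration, step size, and initialization are illustrative choices, not the paper's exact algorithm.

```python
# Weighted Frechet mean on the unit sphere via the standard Karcher
# iteration: average the log-mapped points, then exp-map back.
import torch

def log_map(p, q):
    """Sphere log map at p of q (unit vectors, shape (..., d))."""
    cos = (p * q).sum(-1, keepdim=True).clamp(-1 + 1e-7, 1 - 1e-7)
    theta = torch.acos(cos)
    u = q - cos * p                               # component orthogonal to p
    return theta * u / u.norm(dim=-1, keepdim=True).clamp_min(1e-7)

def exp_map(p, v):
    """Sphere exponential map at p of tangent vector v."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.cos(norm) * p + torch.sin(norm) * v / norm

def weighted_frechet_mean(points, weights, iters=20):
    """points: (n, d) on the sphere; weights: (n,) nonnegative, summing to 1."""
    mean = points[0]
    for _ in range(iters):
        tangent = (weights.unsqueeze(-1) * log_map(mean, points)).sum(0)
        mean = exp_map(mean, tangent)
    return mean
```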
Showing results 1-15 of 601