32,828 Hits in 5.6 sec

A Contrastive Objective for Learning Disentangled Representations [article]

Jonathan Kahana, Yedid Hoshen
2022 arXiv   pre-print
We present a new approach, proposing a domain-wise contrastive objective for ensuring invariant representations.  ...  Here, our objective is to learn representations that are invariant to the domain (sensitive attribute) for which labels are provided, while being informative over all other image attributes, which are  ...  [28] proposed a seminal approach for contrastive learning of disentangled representations.  ...
arXiv:2203.11284v1 fatcat:omgq22i5kndnpofsy6oh72d2mi
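
As a rough illustration of what a domain-wise contrastive objective can look like in practice, the sketch below implements an InfoNCE-style loss in PyTorch where z_a[i] and z_b[i] encode the same content observed under two different domains and every other pairing serves as a negative. The pairing scheme, function name, and temperature are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def domain_contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style sketch: z_a[i] and z_b[i] embed the same content seen
    under two different domains; all other pairs act as negatives. This
    pairing scheme is an illustrative assumption, not the paper's objective."""
    z_a = F.normalize(z_a, dim=1)                 # unit-normalize embeddings
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature          # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)       # match i-th row to i-th column
```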

Unpaired Deep Image Dehazing Using Contrastive Disentanglement Learning [article]

Xiang Chen, Zhentao Fan, Zhuoran Zheng, Yufeng Li, Yufeng Huang, Longgang Dai, Caihua Kong, Pengpeng Li
2022 arXiv   pre-print
To achieve the disentanglement of these two-class factors in deep feature space, contrastive learning is introduced into a CycleGAN framework to learn disentangled representations by guiding the generated  ...  With such a formulation, the proposed contrastive disentangled dehazing method (CDD-GAN) first develops negative generators to cooperate with the encoder network to update alternately, so as to produce a  ...  To achieve factor disentanglement, recent contrastive representation learning may open the door to guiding the learning of an unambiguous embedding.  ...
arXiv:2203.07677v1 fatcat:exohmuvihvbzrpaoettq6kwiju

Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View [article]

Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng
2022 arXiv   pre-print
To achieve this, we propose Disentanglement via Contrast (DisCo) as a framework to model the variations based on the target disentangled representations, and contrast the variations to jointly discover  ...  disentangled representation learning.  ...  This provides a novel contrastive learning view for disentangled representation learning and inspires us to propose a framework: Disentanglement via Contrast (DisCo) for disentangled representation learning  ...
arXiv:2102.10543v2 fatcat:o5733u4egfhsjjocoj2gerayw4

Unsupervised Part-Based Disentangling of Object Shape and Appearance [article]

Dominik Lorenz, Leonard Bereska, Timo Milbich, Björn Ommer
2019 arXiv   pre-print
We present an unsupervised approach for disentangling appearance and shape by learning parts consistently over all instances of a category.  ...  Our model for learning an object representation is trained by simultaneously exploiting invariance and equivariance constraints between synthetically transformed images.  ...  Related Work Disentangling shape and appearance. Factorizing an object representation into shape and appearance is a popular ansatz for representation learning.  ... 
arXiv:1903.06946v3 fatcat:bppuidyf7ngajlphwc4aikhnjy
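
The invariance and equivariance constraints mentioned above can be made concrete with a small sketch: part coordinates predicted on a spatially transformed image should agree with the transformed coordinates predicted on the original image. The helpers warp and warp_points below are hypothetical stand-ins for an image warp and the matching keypoint transform; they are not the authors' code.

```python
import torch

def equivariance_loss(part_locator, images, warp, warp_points):
    """Equivariance sketch: parts located on a warped image should land where
    the warp sends the parts located on the original image. `warp` (image
    transform) and `warp_points` (the same transform applied to 2D keypoints)
    are assumed helper callables, not part of the published method."""
    pts_orig = part_locator(images)              # (N, K, 2) part coordinates
    pts_warped = part_locator(warp(images))      # locate parts on warped input
    return torch.mean((pts_warped - warp_points(pts_orig)) ** 2)
```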

Modality Matches Modality: Pretraining Modality-Disentangled Item Representations for Recommendation

Tengyue Han, Pengfei Wang, Shaozhang Niu, Chenliang Li
2022 Proceedings of the ACM Web Conference 2022  
After this, a contrastive learning scheme is further designed to guarantee the consistency of, and gaps between, modality-disentangled representations.  ...  To this end, we propose a pretraining framework PAMD, which stands for PretrAining Modality-Disentangled Representations Model.  ...  Disentangled Encoder vs. Contrastive Learning. PAMD utilizes two components to pretrain modality representations: a disentangled encoder and contrastive learning.  ...
doi:10.1145/3485447.3512079 fatcat:aburalirkvam3mcyog3cidyoqm

Learning View-Disentangled Human Pose Representation by Contrastive Cross-View Mutual Information Maximization [article]

Long Zhao, Yuxiao Wang, Jiaping Zhao, Liangzhe Yuan, Jennifer J. Sun, Florian Schroff, Hartwig Adam, Xi Peng, Dimitris Metaxas, Ting Liu
2021 arXiv   pre-print
We introduce a novel representation learning method to disentangle pose-dependent as well as view-dependent factors from 2D human poses.  ...  We further propose two regularization terms to ensure disentanglement and smoothness of the learned representations. The resulting pose representations can be used for cross-view action recognition.  ...  Here, we explore view-disentanglement for pose-based action recognition. Some studies focus on learning view-invariant representations for human poses [48] and objects [18] .  ... 
arXiv:2012.01405v2 fatcat:dsyi43zfqjcvzoi3qthnsndo4e

Self-Supervised Disentangled Representation Learning for Third-Person Imitation Learning [article]

Jinghuan Shang, Michael S. Ryoo
2021 arXiv   pre-print
We use a dual auto-encoder structure plus representation permutation loss and time-contrastive loss to ensure the state and viewpoint representations are well disentangled.  ...  To enable better state learning for TPIL, we propose our disentangled representation learning method.  ...  ACKNOWLEDGMENT We thank the reviewers for their comments that greatly improved the manuscript.  ... 
arXiv:2108.01069v1 fatcat:w4kswzgqiffa3ifwn34src4s24
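
For the time-contrastive part of such an objective, one common (and here purely illustrative) formulation is a triplet loss in which a temporally nearby frame serves as the positive and a temporally distant frame as the negative; the margin form below is an assumption, not necessarily the loss used in the paper.

```python
import torch
import torch.nn.functional as F

def time_contrastive_loss(anchor, positive, negative, margin=1.0):
    """Triplet-style sketch: `positive` embeds a frame close in time to
    `anchor`, `negative` a temporally distant frame. The margin value is a
    placeholder."""
    d_pos = F.pairwise_distance(anchor, positive)   # (N,) distances to positives
    d_neg = F.pairwise_distance(anchor, negative)   # (N,) distances to negatives
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```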

Facial Expression Recognition Using Disentangled Adversarial Learning [article]

Kamran Ali, Charles E. Hughes
2019 arXiv   pre-print
In this paper, we propose a novel Disentangled Expression learning-Generative Adversarial Network (DE-GAN) to explicitly disentangle facial expression representation from identity information.  ...  The disentangled facial expression representation is then used for facial expression recognition employing simple classifiers like SVM or MLP.  ...  Disentangled Expression Representation: Given a facial expression image x with expression label y_e and identity label y_id, our main objective is to learn a discriminative expression representation  ...
arXiv:1909.13135v1 fatcat:d5efjpii7zco3n2ssz3mslsala
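
To illustrate the general pattern of adversarially removing identity information from an expression code (a generic sketch, not DE-GAN itself), one minimal setup trains an identity classifier on the expression representation while the encoder is updated to make identity unpredictable from that representation. The layer sizes, input resolution, and number of identities below are placeholders.

```python
import torch.nn as nn

# Generic adversarial-disentanglement sketch (not the DE-GAN architecture):
# single-channel 64x64 inputs, 128-d expression code, 10 identities assumed.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU())
id_classifier = nn.Linear(128, 10)
ce = nn.CrossEntropyLoss()

def adversarial_losses(images, id_labels):
    """Returns (classifier loss, encoder loss); each would be minimized by its
    own optimizer. Maximizing the classifier's error is one simple way to push
    identity information out of the expression code."""
    z_expr = encoder(images)
    d_loss = ce(id_classifier(z_expr.detach()), id_labels)  # train the identity probe
    g_loss = -ce(id_classifier(z_expr), id_labels)          # encoder hides identity
    return d_loss, g_loss
```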

Weakly Supervised Disentangled Representation for Goal-conditioned Reinforcement Learning

Zhifeng Qian, You Mingyu, Zhou Hongjun, Bin He
2022 IEEE Robotics and Automation Letters  
In this paper, we propose a skill learning framework DR-GRL that aims to improve sample efficiency and policy generalization by combining Disentangled Representation learning and Goal-conditioned  ...  In a weakly supervised manner, we propose a Spatial Transform AutoEncoder (STAE) to learn an interpretable and controllable representation in which different parts correspond to different object attributes  ...  In contrast, our DR-GRL learns representations that can disentangle different attributes of the objects in a weakly supervised manner.  ...
doi:10.1109/lra.2022.3141148 fatcat:sprl5ju4x5ftbmq76cp3c3cai4

Disentangled Implicit Shape and Pose Learning for Scalable 6D Pose Estimation [article]

Yilin Wen, Xiangyu Li, Hao Pan, Lei Yang, Zheng Wang, Taku Komura, Wenping Wang
2021 arXiv   pre-print
In this paper, we present a novel approach for scalable 6D pose estimation, by self-supervised learning on synthetic data of multiple objects using a single autoencoder.  ...  To encourage shape space construction, we apply contrastive metric learning and enable the processing of unseen objects by referring to similar training objects.  ...  Disentangled representation learning Disentanglement is identified as a key objective for learning representations that are interpretable and generalized [1, 28] .  ... 
arXiv:2107.12549v1 fatcat:pv2yw4zai5gjfap3yxotrrbv5m

Disentangled Speech Embeddings Using Cross-Modal Self-Supervision

Arsha Nagrani, Joon Son Chung, Samuel Albanie, Andrew Zisserman
2020 ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
We construct a two-stream architecture which: (1) shares low-level features common to both representations; and (2) provides a natural mechanism for explicitly disentangling these factors, offering the  ...  The objective of this paper is to learn representations of speaker identity without access to manually annotated data.  ...  Arsha is funded by a Google PhD Fellowship.  ... 
doi:10.1109/icassp40776.2020.9054057 dblp:conf/icassp/NagraniCAZ20 fatcat:lwlcxavsorabxezb4f5ihos3de
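
The two-stream design described above, with shared low-level features followed by factor-specific branches, can be sketched roughly as follows; the layer sizes, input dimensionality, and head names are placeholders rather than the architecture from the paper.

```python
import torch.nn as nn

class TwoStreamEmbedder(nn.Module):
    """Sketch of a two-stream embedder: a shared low-level trunk feeding two
    separate heads, one per disentangled factor (e.g. identity vs. content).
    All dimensions are illustrative placeholders."""
    def __init__(self, in_dim=40, hidden=256, emb=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.identity_head = nn.Linear(hidden, emb)
        self.content_head = nn.Linear(hidden, emb)

    def forward(self, x):
        h = self.trunk(x)                      # shared low-level features
        return self.identity_head(h), self.content_head(h)
```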

Learning Disentangled Representation Implicitly via Transformer for Occluded Person Re-Identification [article]

Mengxi Jia, Xinhua Cheng, Shijian Lu, Jian Zhang
2021 arXiv   pre-print
We design DRL-Net, a disentangled representation learning network that handles occluded re-ID without requiring strict person image alignment or any additional supervision.  ...  To better eliminate interference from occlusions, we design a contrast feature learning technique (CFL) for better separation of occlusion features and discriminative ID features.  ...  In addition, we design a contrast feature learning module and a data augmentation strategy for better isolating ID-irrelevant features from global representation and suppressing occlusion interference.  ... 
arXiv:2107.02380v1 fatcat:kfjfp6zchzeq7c53udq6wzfncu

Visual Concepts Tokenization [article]

Tao Yang, Yuwang Wang, Yan Lu, Nanning Zheng
2022 arXiv   pre-print
representation learning and scene decomposition.  ...  Extensive experiments on several popular datasets verify the effectiveness of VCT on the tasks of disentangled representation learning and scene decomposition.  ...  Different from previous disentangled representation learning methods, which typically only consider one main object, the visual concepts on scene decomposition tasks focus on the object level representation  ... 
arXiv:2205.10093v1 fatcat:qtm73j6w3zax5izbugh3fis6jm

Facial Expression Representation Learning by Synthesizing Expression Images [article]

Kamran Ali, Charles E. Hughes
2019 arXiv   pre-print
In this paper, we propose a novel Disentangled Expression learning-Generative Adversarial Network (DE-GAN) which combines the concept of disentangled representation learning with residue learning to explicitly  ...  Unlike previous works using only expression residual learning for facial expression recognition, our method learns the disentangled expression representation along with the expressive component recorded  ...  The main contributions of this paper are as follows: • We present a novel disentangled and discriminative facial expression representation learning technique for FER using adversarial learning combined  ... 
arXiv:1912.01456v1 fatcat:w3rukvp7ofdurjaiie4mrsoed4

Cluster-based Contrastive Disentangling for Generalized Zero-Shot Learning [article]

Yi Gao, Chenwei Tang, Jiancheng Lv
2022 arXiv   pre-print
In this paper, we propose a Cluster-based Contrastive Disentangling (CCD) method to improve GZSL by alleviating the semantic gap and domain shift problems.  ...  Moreover, we introduce contrastive learning on semantic-matched and class-unique variables to learn high intra-set and intra-class similarity, as well as inter-set and inter-class discriminability.  ...  Disentangled Representation Learning Disentangled representation learning means decomposing the feature representation into multiple factors that are independent of each other [2], which emulates  ...
arXiv:2203.02648v1 fatcat:tlaz3bsmbfhfxfvwpcqom25r6i
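
As a generic illustration of contrastive learning aimed at intra-class similarity and inter-class discriminability (a SupCon-style sketch, not the CCD method itself), the loss below pulls together embeddings that share a label and pushes the rest apart; the temperature and normalization choices are assumptions.

```python
import torch
import torch.nn.functional as F

def class_contrastive_loss(z, labels, temperature=0.1):
    """SupCon-style sketch: embeddings sharing a label are treated as
    positives, all others as negatives. Illustrative only; not the paper's
    exact objective."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                                   # (N, N)
    eye = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos_mask = ((labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye).float()
    # per-pair log-probability, excluding self-similarity from the denominator
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return -(log_prob * pos_mask).sum(dim=1).div(pos_counts).mean()
```
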
Showing results 1 — 15 out of 32,828 results