14,322 Hits in 5.1 sec

Learning Robust Representations via Multi-View Information Bottleneck [article]

Marco Federici, Anjan Dutta, Patrick Forré, Nate Kushman, Zeynep Akata
2020 arXiv   pre-print
The information bottleneck principle provides an information-theoretic method for representation learning, by training an encoder to retain all information which is relevant for predicting the label while ... approaches for representation learning. ... In this work, we introduce Multi-View Information Bottleneck, a novel method for taking advantage of multiple data-views to produce robust representations for downstream tasks ...
arXiv:2002.07017v2 fatcat:h2y2a2ncrfavbatrqdc3d4h53m
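For context, the principle referenced in this abstract is usually stated as a Lagrangian trading compression against label relevance; in the multi-view extension, the unavailable label is replaced by a second view of the same content. A schematic statement (notation is ours, not the paper's):

```latex
% Classical IB: compress X into Z while keeping Z predictive of Y
\min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)

% Multi-view variant (schematic): with two views v_1, v_2 of the same content,
% discard what is specific to v_1 while retaining what it shares with v_2
\min_{p(z \mid v_1)} \; I(V_1; Z \mid V_2) - \beta \, I(Z; V_2)
```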

Self-Supervised Information Bottleneck for Deep Multi-View Subspace Clustering [article]

Shiye Wang, Changsheng Li, Yanming Li, Ye Yuan, Guoren Wang
2022 arXiv   pre-print
Self-supervised Information Bottleneck based Multi-view Subspace Clustering (SIB-MSC). ... Inheriting the advantages of the information bottleneck, SIB-MSC can learn a latent space for each view to capture common information among the latent representations of different views by removing superfluous ... Note that MIB is originally designed for multi-view representation learning via the information bottleneck. ...
arXiv:2204.12496v2 fatcat:joota56wzzcnzd5fjhrzw7fjna
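Since the snippet credits MIB as the starting point, it may help to see how such a two-view objective is typically instantiated: an InfoNCE surrogate keeps cross-view (shared) information while a symmetrised KL between the two views' posteriors discourages view-specific information. A minimal PyTorch sketch; the function names, loss form, and hyperparameters are illustrative assumptions, not the paper's code:

```python
import torch
import torch.nn.functional as F

def symmetric_kl(mu1, logvar1, mu2, logvar2):
    """Symmetrised KL divergence between two diagonal Gaussians, per sample."""
    def kl(m1, lv1, m2, lv2):
        return 0.5 * ((lv2 - lv1) + (lv1.exp() + (m1 - m2) ** 2) / lv2.exp() - 1).sum(-1)
    return 0.5 * (kl(mu1, logvar1, mu2, logvar2) + kl(mu2, logvar2, mu1, logvar1))

def two_view_ib_loss(z1, z2, mu1, logvar1, mu2, logvar2, beta=1e-3, tau=0.1):
    """InfoNCE term: keep information shared across views (bound on I(z1; z2)).
    SKL term: penalise information that only one view's posterior carries."""
    z1n, z2n = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1n @ z2n.t() / tau                         # (B, B) cross-view similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    nce = F.cross_entropy(logits, labels)
    return nce + beta * symmetric_kl(mu1, logvar1, mu2, logvar2).mean()
```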

DRIBO: Robust Deep Reinforcement Learning via Multi-View Information Bottleneck [article]

Jiameng Fan, Wenchao Li
2021 arXiv   pre-print
To address this problem, we leverage the sequential nature of RL to learn robust representations that encode only task-relevant information from observations, based on the unsupervised multi-view setting ... Specifically, we introduce an auxiliary objective based on the multi-view information bottleneck (MIB) principle, which quantifies the amount of task-irrelevant information and encourages learning representations ... Figure 1: Robust Deep Reinforcement Learning via Multi-View Information BOttleneck (DRIBO) incorporates the inherent sequential structure of RL and unsupervised multi-view settings into robust representation ...
arXiv:2102.13268v3 fatcat:qprtnfyelnajxdomuov4iuhvji
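Schematically (our paraphrase, not the paper's notation), the auxiliary objective described here extends the two-view trade-off to observation sequences: keep the representation information shared between two augmented views of a trajectory, and squeeze out what is view-specific and hence task-irrelevant:

```latex
\max \; I\big(Z^{(1)}_{1:T};\, Z^{(2)}_{1:T}\big)
\;-\; \beta \, I\big(O^{(1)}_{1:T};\, Z^{(1)}_{1:T} \mid Z^{(2)}_{1:T}\big)
```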

Layer-wise Learning of Stochastic Neural Networks with Information Bottleneck [article]

Thanh T. Nguyen, Jaesik Choi
2019 arXiv   pre-print
Information Bottleneck (IB) is a generalization of rate-distortion theory that naturally incorporates compression and relevance trade-offs for learning. ... In this work, we propose Information Multi-Bottlenecks (IMBs) as an extension of IB to multiple bottlenecks, which has a direct application to training neural networks by considering layers as multiple ... In this work, we propose a unifying perspective that bridges IB and neural networks via Information Multi-Bottlenecks (IMBs). ...
arXiv:1712.01272v5 fatcat:p4sezgntjjaklgphttenf2jwme
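The "layers as multiple bottlenecks" idea can be written schematically as a per-layer IB trade-off, with each layer's stochastic activations treated as its own bottleneck variable (notation ours):

```latex
\min \; \sum_{\ell=1}^{L} \Big( I(X; Z_\ell) - \beta_\ell \, I(Z_\ell; Y) \Big),
\qquad Z_\ell = \text{stochastic activations of layer } \ell
```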

Farewell to Mutual Information: Variational Distillation for Cross-Modal Person Re-Identification [article]

Xudong Tian, Zhizhong Zhang, Shaohui Lin, Yanyun Qu, Yuan Xie, Lizhuang Ma
2021 arXiv   pre-print
Furthermore, by extending VSD to multi-view learning, we introduce two other strategies, Variational Cross-Distillation (VCD) and Variational Mutual-Learning (VML), which significantly improve the robustness ... The Information Bottleneck (IB) provides an information-theoretic principle for representation learning, by retaining all information relevant for predicting the label while minimizing the redundancy. ... Furthermore, by extending VSD to multi-view learning, we propose Variational Cross-Distillation (VCD) and Variational Mutual-Learning (VML), strategies that improve the robustness of the information bottleneck ...
arXiv:2104.02862v1 fatcat:dhct4i62l5gbfnopxfty7rtqoy
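The sufficiency-plus-minimality language used in this abstract has a standard formalisation, which is useful context for the distillation strategies named here (notation ours, not the paper's):

```latex
% Z is a representation of the input V with target Y
\text{sufficiency:} \quad I(Z; Y) = I(V; Y)
\qquad
\text{minimality:} \quad \min_{Z \,:\, Z \text{ sufficient}} I(V; Z)
```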

A Variational Information Bottleneck Approach to Multi-Omics Data Integration [article]

Changhee Lee, Mihaela van der Schaar
2021 arXiv   pre-print
To address such challenges, we propose a deep variational information bottleneck (IB) approach for incomplete multi-view observations.  ...  Most importantly, by modeling the joint representations as a product of marginal representations, we can efficiently learn from observed views with various view-missing patterns.  ...  Robustness to Missing Views Next, we evaluate how robust the multi-view learning methods are with respect to the view-missing rate.  ... 
arXiv:2102.03014v2 fatcat:x76j75hotjaylfprcibktc6lfm
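"Joint representation as a product of marginal representations" is a product-of-experts fusion: for diagonal-Gaussian marginals, precisions add, so missing views simply drop out of the sum. A minimal sketch; the function name, tensor layout, and the unit-Gaussian prior expert are our assumptions:

```python
import torch

def product_of_experts(mus, logvars, observed):
    """mus, logvars: (V, B, D) per-view Gaussian parameters;
    observed: (V, B) mask, 1 where a view is present.
    Returns the joint Gaussian over the (B, D) latent space."""
    precision = torch.exp(-logvars) * observed.unsqueeze(-1)   # missing views contribute 0
    joint_precision = precision.sum(dim=0) + 1.0               # + N(0, I) prior expert
    joint_mu = (mus * precision).sum(dim=0) / joint_precision  # prior mean 0 adds nothing
    joint_logvar = -torch.log(joint_precision)
    return joint_mu, joint_logvar
```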

Self-Supervised Graph Representation Learning via Information Bottleneck

Junhua Gu, Zichen Zheng, Wenmiao Zhou, Yajuan Zhang, Zhengjun Lu, Liang Yang
2022 Symmetry  
Therefore, the self-supervised graph information bottleneck (SGIB) proposed in this paper uses the symmetry and asymmetry of graphs to establish contrastive learning and introduces the information bottleneck ... Graph representation learning has become a mainstream method for processing network-structured data, and most graph representation learning methods rely heavily on label information for downstream tasks ...
doi:10.3390/sym14040657 fatcat:oosvggxjerfc3hvbtlreeibu7i

Unsupervised Learning of Visual 3D Keypoints for Control [article]

Boyuan Chen, Pieter Abbeel, Deepak Pathak
2021 arXiv   pre-print
The input images are embedded into latent 3D keypoints via a differentiable encoder which is trained to optimize both a multi-view consistency loss and a downstream task objective. ... Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations. ... The consistency across views is learned via a multi-view consistency loss that ensures keypoints from different views map to a common world coordinate frame. ...
arXiv:2106.07643v1 fatcat:5vmjphexcbeydjixga245sr764
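A multi-view consistency loss of the kind described can be sketched directly: map each view's predicted keypoints into the world frame and penalise disagreement across views. A minimal sketch assuming known camera-to-world extrinsics; the paper's exact formulation may differ:

```python
import torch

def multiview_consistency_loss(kp_cam, cam_to_world):
    """kp_cam: (V, B, K, 3) keypoints predicted in each camera's frame;
    cam_to_world: (V, 4, 4) known extrinsics per view."""
    V, B, K, _ = kp_cam.shape
    ones = torch.ones(V, B, K, 1, dtype=kp_cam.dtype, device=kp_cam.device)
    homo = torch.cat([kp_cam, ones], dim=-1)                             # (V, B, K, 4)
    world = torch.einsum('vij,vbkj->vbki', cam_to_world, homo)[..., :3]  # to world frame
    return ((world - world.mean(dim=0, keepdim=True)) ** 2).mean()       # cross-view variance
```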

Learning Robust Representation through Graph Adversarial Contrastive Learning [article]

Jiayan Guo, Shangyang Li, Yue Zhao, Yan Zhang
2022 arXiv   pre-print
Based on the Information Bottleneck principle, we theoretically prove that our method achieves a much tighter bound, thus improving the robustness of graph representation learning. ... Thus, it is essential to learn robust representations in graph neural networks. ... Information Bottleneck Principle for Graph Self-supervised Learning: The Information Bottleneck (IB) [14, 15] provides an essential principle for representation learning from the perspective of information ...
arXiv:2201.13025v1 fatcat:lhi5svladzbrpjn77ofvd6rstm

Learning Inner-Group Relations on Point Clouds [article]

Haoxi Ran, Wei Zhuo, Jun Liu, Li Lu
2021 arXiv   pre-print
Finally, we conduct experiments to reveal the robustness of RPNet with regard to rigid transformations and noise. ... Multi-view methods [10, 15, 54, 16, 39] describe a 3D object with multiple views from different viewpoints. ... PointNet [38] learns global information through pointwise multi-layer perceptrons and a max-pooling operation. ...
arXiv:2108.12468v1 fatcat:6bg5wux4rnbsbkcty55camkt3u
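The snippet's one-line summary of PointNet — shared pointwise MLPs followed by a symmetric max-pool — is easy to see in code. A minimal sketch; the layer sizes are arbitrary and not taken from either paper:

```python
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Pointwise MLP shared across points, then order-invariant max-pooling."""
    def __init__(self, in_dim=3, feat_dim=128, num_classes=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, pts):              # pts: (B, N, 3) unordered points
        f = self.mlp(pts)                # (B, N, feat_dim), same weights per point
        g = f.max(dim=1).values          # global feature, invariant to point order
        return self.head(g)
```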

Do Self-Supervised and Supervised Methods Learn Similar Visual Representations? [article]

Tom George Grigg, Dan Busbridge, Jason Ramapuram, Russ Webb
2021 arXiv   pre-print
We find that the methods learn similar intermediate representations through dissimilar means, and that the representations diverge rapidly in the final few layers.  ...  Despite the success of a number of recent techniques for visual self-supervised deep learning, there has been limited investigation into the representations that are ultimately learned.  ...  Or are self-supervised representations more robust in a multi-task/multi-distribution setting? We leave these questions for future work.  ... 
arXiv:2110.00528v3 fatcat:t3ldzhrvp5hjvdtro5ohogxg5e
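Layer-by-layer comparisons of this kind are commonly made with linear centered kernel alignment (CKA); we are not asserting this is the paper's exact methodology, but a minimal sketch of the standard measure is:

```python
import numpy as np

def linear_cka(X, Y):
    """X: (n, d1), Y: (n, d2) activations for the same n examples.
    Returns a similarity in [0, 1]; 1 means identical up to rotation/scale."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, 'fro') ** 2
    return hsic / (np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro'))
```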

Towards Unsupervised Crowd Counting via Regression-Detection Bi-knowledge Transfer

Yuting Liu, Zheng Wang, Miaojing Shi, Shin'ichi Satoh, Qijun Zhao, Hongyu Yang
2020 Proceedings of the 28th ACM International Conference on Multimedia  
In order to reinforce the information bottleneck, we introduce a multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network. ... This information bottleneck makes a trade-off between the image-specific structure and class-specific information in an image. ... The original image is low-pass filtered via the DCT to create multi-scale ground truth for the decoder learning. ...
doi:10.1145/3394171.3413825 dblp:conf/mm/Liu0SSZY20 fatcat:e3kdjdrsmbeybo6izn6cqnt4se
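Low-pass filtering an image via the DCT, as the snippet describes for building decoder targets, amounts to zeroing high-frequency coefficients. A minimal single-scale sketch; the keep_frac cutoff is our simplification of the paper's multi-scale scheme:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(img, keep_frac=0.25):
    """Keep only the lowest-frequency DCT coefficients of a 2D image."""
    coeffs = dctn(img, norm='ortho')
    h, w = img.shape
    mask = np.zeros_like(coeffs)
    mask[:int(h * keep_frac), :int(w * keep_frac)] = 1.0  # low frequencies live top-left
    return idctn(coeffs * mask, norm='ortho')
```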

Perturbation Robust Representations of Topological Persistence Diagrams [chapter]

Anirudh Som, Kowshik Thopalli, Karthikeyan Natesan Ramamurthy, Vinay Venkataraman, Ankita Shukla, Pavan Turaga
2018 Lecture Notes in Computer Science  
In this paper we present theoretically well-grounded approaches to develop novel perturbation-robust topological representations, with the long-term view of making them amenable to fusion with contemporary ... However, persistence diagrams are multi-sets of points and hence it is not straightforward to fuse them with features used for contemporary machine learning tools like deep nets. ... Sample frames across 5 views for 2 actions are shown in Figure 4. We consider only the silhouette information in the dataset for our PTS representations. ...
doi:10.1007/978-3-030-01234-2_38 fatcat:c5bouc5iqfczfp3r3yhqr7vxum
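To make the "multi-set of points" obstacle concrete: a common route to stable, fusable representations is to rasterise a diagram with Gaussian kernels, in the spirit of persistence images; the paper's PTS construction differs in detail, so treat this as an illustrative sketch only:

```python
import numpy as np

def persistence_surface(diagram, grid_size=32, sigma=0.05):
    """diagram: (m, 2) array of (birth, death) pairs, assumed scaled to [0, 1].
    Returns a fixed-size 2D summary; small point perturbations move it smoothly."""
    xs = np.linspace(0.0, 1.0, grid_size)
    gx, gy = np.meshgrid(xs, xs)
    surface = np.zeros_like(gx)
    for b, d in diagram:
        surface += np.exp(-((gx - b) ** 2 + (gy - d) ** 2) / (2 * sigma ** 2))
    return surface
```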

Perturbation Robust Representations of Topological Persistence Diagrams [article]

Anirudh Som, Kowshik Thopalli, Karthikeyan Natesan Ramamurthy, Vinay Venkataraman, Ankita Shukla, Pavan Turaga
2018 arXiv   pre-print
In this paper we present theoretically well-grounded approaches to develop novel perturbation-robust topological representations, with the long-term view of making them amenable to fusion with contemporary ... However, persistence diagrams are multi-sets of points and hence it is not straightforward to fuse them with features used for contemporary machine learning tools like deep nets. ... Sample frames across 5 views for 2 actions are shown in Figure 4. We consider only the silhouette information in the dataset for our PTS representations. ...
arXiv:1807.10400v1 fatcat:3vfztcnho5bf7aicwl2ye4gqju

Robust Hashing for Multi-View Data: Jointly Learning Low-Rank Kernelized Similarity Consensus and Hash Functions [article]

Lin Wu, Yang Wang
2016 arXiv   pre-print
In this paper, we motivate the problem of jointly and efficiently training robust hash functions over data objects with multi-feature representations which may be noise-corrupted. ... Learning hash functions/codes for similarity search over multi-view data is attracting increasing attention, where similar hash codes are assigned to the data objects characterizing consistent neighborhood ... Thus, the learned low-rank similarity matrix across multi-views can reflect the underlying clustering information. ...
arXiv:1611.05521v1 fatcat:vtnotaqms5dd5b4vklca4sevzu
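Schematically, "low-rank kernelized similarity consensus" can be read as fitting one consensus similarity matrix to the per-view kernel similarities under a nuclear-norm (low-rank) penalty, jointly with the hashing objective (notation ours, not the paper's exact formulation):

```latex
\min_{S,\,H} \; \sum_{v=1}^{V} \alpha_v \lVert K_v - S \rVert_F^2
\;+\; \lambda \lVert S \rVert_{*}
\;+\; \mu \, \mathcal{L}_{\mathrm{hash}}(H, S)
```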
Showing results 1 — 15 out of 14,322 results