17,619 Hits in 3.0 sec

A Learnable Self-supervised Task for Unsupervised Domain Adaptation on Point Clouds [article]

Xiaoyuan Luo, Shaolei Liu, Kexue Fu, Manning Wang, Zhijian Song
2021 arXiv   pre-print
In this paper, we propose a learnable self-supervised task and integrate it into a self-supervision-based point cloud UDA architecture.  ...  Deep neural networks have achieved promising performance in supervised point cloud applications, but manual annotation is extremely expensive and time-consuming in supervised learning schemes.  ...  The above studies show the effectiveness of integrating a self-supervised task into point cloud UDA.  ... 
arXiv:2104.05164v1 fatcat:miepy2syvfaivgmtzqdapq6tki

Self-supervised Recommendation with Cross-channel Matching Representation and Hierarchical Contrastive Learning [article]

Dongjie Zhu, Yundong Sun, Haiwen Du, Zhaoshuo Tian
2021 arXiv   pre-print
Based on this, we also proposed a hierarchical self-supervised learning model, which realized two levels of self-supervised learning within and between channels and improved the ability of self-supervised  ...  This is the first attempt in the field of recommender systems; we believe the insight of this paper is inspirational to future self-supervised learning research based on multi-channel information.  ...  On this basis, it performs self-supervised learning through its proposed HMIN model in each view, which is regarded as an auxiliary task.  ... 
arXiv:2109.00676v3 fatcat:ateh67wf7bcbxkhsevmyvlwd2m

Self-Supervised Regional and Temporal Auxiliary Tasks for Facial Action Unit Recognition [article]

Jingwei Yan, Jingjing Wang, Qiang Li, Chunmao Wang, Shiliang Pu
2021 arXiv   pre-print
Based on these two self-supervised auxiliary tasks, local features, mutual relation and motion cues of AUs are better captured in the backbone network with the proposed regional and temporal based auxiliary  ...  Motivated by this, we take the AU properties into consideration and propose two auxiliary AU-related tasks to bridge the gap between limited annotations and the model performance in a self-supervised manner  ...  In this paper, we will delve into the unlabeled data and learn discriminative feature representation for AU recognition from the aspect of self-supervised auxiliary task learning.  ... 
arXiv:2107.14399v1 fatcat:bb3f65jiujcnjae5eampgsk2wa

Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation [article]

Min-Hung Chen, Baopu Li, Yingze Bao, Ghassan AlRegib, Zsolt Kira
2020 arXiv   pre-print
To reduce the discrepancy, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), which contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain  ...  One main challenge is the problem of spatiotemporal variations (e.g. different people may perform the same activity in various ways).  ...  Self-Supervised Learning has become popular in recent years for images and videos given the ability to learn informative feature representations without human supervision.  ... 
arXiv:2003.02824v3 fatcat:3lreivwgnbdlfipcmd755ryvea

Injecting Text in Self-Supervised Speech Pretraining [article]

Zhehuai Chen, Yu Zhang, Andrew Rosenberg, Bhuvana Ramabhadran, Gary Wang, Pedro Moreno
2021 arXiv   pre-print
The proposed method, tts4pretrain, complements the power of contrastive learning in self-supervision with linguistic/lexical representations derived from synthesized speech, effectively learning from untranscribed  ...  Self-supervised pretraining for Automated Speech Recognition (ASR) has shown varied degrees of success.  ...  Unspoken text is complementary to untranscribed speech in self-supervised learning. It is also much easier to collect than untranscribed speech.  ... 
arXiv:2108.12226v1 fatcat:mc55fw4pt5febcfyksuvm46hcq

Hierarchical Self-supervised Augmented Knowledge Distillation [article]

Chuanguang Yang, Zhulin An, Linhang Cai, Yongjun Xu
2021 arXiv   pre-print
We therefore adopt an alternative self-supervised augmented task to guide the network to learn the joint distribution of the original recognition task and self-supervised auxiliary task.  ...  Although recent self-supervised contrastive knowledge achieves the best performance, forcing the network to learn such knowledge may damage the representation learning of the original class recognition  ...  We guide all auxiliary classifiers attached to the original network to learn informative self-supervised augmented distributions.  ... 
arXiv:2107.13715v1 fatcat:vd7d4kkzcnhuhiyey5eicac7eq

A Brief Summary of Interactions Between Meta-Learning and Self-Supervised Learning [article]

Huimin Peng
2021 arXiv   pre-print
We show that an integration of meta-learning and self-supervised learning models can best contribute to the improvement of model generalization capability.  ...  In self-supervised learning, data augmentation techniques are widely applied and data labels are not required since pseudo labels can be estimated from trained models on similar tasks.  ...  I am grateful for valuable publications from Jacques Pitrat, which describe his research work on artificial general intelligence in detail.  ... 
arXiv:2103.00845v2 fatcat:soq6tfl56vgshebtnot57e4qwe

Learning to Generalize One Sample at a Time with Self-Supervision [article]

Antonio D'Innocente, Silvia Bucci, Barbara Caputo, Tatiana Tommasi
2019 arXiv   pre-print
In this paper we argue that the data annotation overload should be minimal, as it is costly. Hence, we propose to use self-supervised learning to achieve domain generalization and adaptation.  ...  We consider learning regularities from non-annotated data as an auxiliary task, and cast the problem within a principled Auxiliary Learning framework.  ...  In our approach we propose to exploit self-supervision as an auxiliary task together with the primary supervised task.  ... 
arXiv:1910.03915v3 fatcat:nvuurvhzqfddze4obbpdc6jtj4

Relevance learning in generative topographic mapping

Andrej Gisbrecht, Barbara Hammer
2011 Neurocomputing  
The framework of relevance learning or learning metrics as introduced in [4, 6] offers an elegant way to shape the metric according to auxiliary information at hand such that only those aspects are displayed  ...  Here we introduce the concept of relevance learning into GTM such that the metric is shaped according to auxiliary class labels.  ...  Thereby, auxiliary information such as class labels is integrated and only those aspects of the data are displayed which carry information for the given auxiliary data at hand.  ... 
doi:10.1016/j.neucom.2010.12.015 fatcat:4vrfg2o2ebdn7bi7uvre2sikte

Learning from Extrinsic and Intrinsic Supervisions for Domain Generalization [article]

Shujun Wang, Lequan Yu, Caizi Li, Chi-Wing Fu, Pheng-Ann Heng
2020 arXiv   pre-print
Besides conducting the common supervised recognition task, we seamlessly integrate a momentum metric learning task and a self-supervised auxiliary task to collectively utilize the extrinsic supervision  ...  To this end, we present a new domain generalization framework that learns how to generalize across domains simultaneously from extrinsic relationship supervision and intrinsic self-supervision for images  ...  The work described in this paper was supported in part by the  ... 
arXiv:2007.09316v1 fatcat:qqhd6onu55hqdlliczuns3kcpm

Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [article]

Qin Wang, Dengxin Dai, Lukas Hoyer, Luc Van Gool, Olga Fink
2021 arXiv   pre-print
However, such supervision is not always available. In this work, we leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domain gap.  ...  Leveraging the supervision from auxiliary tasks (such as depth estimation) has the potential to heal this shift because many visual tasks are closely related to each other.  ...  Self-supervised learning Our work is also related to self-supervised learning in a broad sense.  ... 
arXiv:2104.13613v2 fatcat:6velaiarczhojkpyizv7app7le

Multi-Pretext Attention Network for Few-shot Learning with Self-supervision [article]

Hainan Li, Renshuai Tao, Jun Li, Haotong Qin, Yifu Ding, Shuo Wang, Xianglong Liu
2021 arXiv   pre-print
Existing studies rarely exploit auxiliary information from large amounts of unlabeled data. Self-supervised learning has emerged as an efficient method to utilize unlabeled data.  ...  In this work, we propose Graph-driven Clustering (GC), a novel augmentation-free method for self-supervised learning, which does not rely on any auxiliary sample and utilizes the endogenous correlation  ...  Second, we remove the auxiliary models of self-supervised learning in the top path in Fig. 1 but retain the few-shot learning classifiers in the bottom path.  ... 
arXiv:2103.05985v1 fatcat:wgadzl75gzeobcdyvqypyzxt2e

Self-Supervised Multi-Channel Hypergraph Convolutional Network for Social Recommendation [article]

Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet Hung, Xiangliang Zhang
2021 arXiv   pre-print
To compensate for the aggregating loss, we innovatively integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information with hierarchical  ...  experimental results on multiple real-world datasets show that the proposed model outperforms the SOTA methods, and the ablation study verifies the effectiveness of the multi-channel setting and the self-supervised  ...  To address this issue and fully inherit the rich information in the hypergraphs, we innovatively integrate self-supervised learning into the training of MHCN.  ... 
arXiv:2101.06448v3 fatcat:qvrkivpzyrentl2vp2dykw4acu

Improving Semantic Analysis on Point Clouds via Auxiliary Supervision of Local Geometric Priors [article]

Lulu Tang, Ke Chen, Chaozheng Wu, Yu Hong, Kui Jia, Zhixin Yang
2020 arXiv   pre-print
via physical computation from point clouds themselves as self-supervision signals or provided as privileged information.  ...  Owing to explicitly encoding local shape manifolds in favor of semantic analysis, the proposed geometric self-supervised and privileged learning algorithms can achieve superior performance to their backbone  ...  Alternatively, geometric properties can serve as auxiliary self-supervision signals, inspired by the recent success of self-supervised learning in visual recognition [14], [15], [16], [17],  ... 
arXiv:2001.04803v2 fatcat:xjytkcufvzftpgtfzfi7uu7kre

Improving Few-Shot Learning using Composite Rotation based Auxiliary Task [article]

Pratik Mazumder, Pravendra Singh, Vinay P. Namboodiri
2020 arXiv   pre-print
Our approach aims to train networks to produce such features by using a self-supervised auxiliary task.  ...  In this paper, we propose an approach to improve few-shot classification performance using a composite rotation based auxiliary task.  ...  Self-supervised learning trains networks by making use of the structural information contained in the input images.  ... 
arXiv:2006.15919v2 fatcat:bbiqp6maznbk7bvobj7b5s3ukm
Showing results 1 — 15 out of 17,619 results