146,196 Hits in 6.5 sec

One Objective for All Models – Self-supervised Learning for Topic Models [article]

Zeping Luo, Cindy Weng, Shiyou Wu, Mo Zhou, Rong Ge
2022 arXiv pre-print
In particular, we prove that commonly used self-supervised objectives based on reconstruction or contrastive samples can both recover useful posterior information for general topic models.  ...  In this paper, we highlight a key advantage of self-supervised learning -- when applied to data generated by topic models, self-supervised learning can be oblivious to the specific model, and hence is  ...  If one self-supervised objective can capture all models, then it would be able to extract useful information.  ... 
arXiv:2203.03539v1 fatcat:g6kxid254vc45lwmnd3np7dpyy

Self-Supervised Learning of Visual Features through Embedding Images into Text Topic Spaces

Lluis Gomez, Yash Patel, Marcal Rusinol, Dimosthenis Karatzas, C. V. Jawahar
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique.  ...  We put forward the idea of performing self-supervised learning of visual features by mining a large scale corpus of multimodal (text and image) documents.  ...  Acknowledgment We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.  ... 
doi:10.1109/cvpr.2017.218 dblp:conf/cvpr/Gomez-BigordaPR17 fatcat:paymsbngcbfblfp5ehxgnk3gpm

Self-supervised learning of visual features through embedding images into text topic spaces [article]

Lluis Gomez, Yash Patel, Marçal Rusiñol, Dimosthenis Karatzas, C.V. Jawahar
2017 arXiv pre-print
For this we leverage the hidden semantic structures discovered in the text corpus with a well-known topic modeling technique.  ...  We put forward the idea of performing self-supervised learning of visual features by mining a large scale corpus of multi-modal (text and image) documents.  ...  Acknowledgment We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.  ... 
arXiv:1705.08631v1 fatcat:c7pu7heiobcexhftqemuuye6pi

TextTopicNet - Self-Supervised Learning of Visual Features Through Embedding Images on Semantic Text Spaces [article]

Yash Patel, Lluis Gomez, Raul Gomez, Marçal Rusiñol, Dimosthenis Karatzas, C.V. Jawahar
2018 arXiv pre-print
More specifically we use popular text embedding techniques to provide the self-supervision for the training of deep CNN.  ...  Our experiments demonstrate state-of-the-art performance in image classification, object detection, and multi-modal retrieval compared to recent self-supervised or naturally-supervised approaches.  ...  We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan X Pascal GPU used for this research.  ... 
arXiv:1807.02110v1 fatcat:3qe3xgsuzfem5j5doiak5bexeq

Self-Supervised Representation Learning on Document Images [article]

Adrian Cosma, Mihai Ghidoveanu, Michael Panaitescu-Liess, Marius Popescu
2020 arXiv pre-print
We also propose a novel method for self-supervision, which makes use of the inherent multi-modality of documents (image and text), which performs better than other popular self-supervised methods, including  ...  This work analyses the impact of self-supervised pre-training on document images in the context of document image classification.  ...  Acknowledgements We want to express our appreciation for everyone involved at Sparktech Software, for fruitful discussions, and much-needed suggestions.  ... 
arXiv:2004.10605v2 fatcat:nt6stpirendxnpqp3dasjbniay

Self-Supervised Visual Representations for Cross-Modal Retrieval [article]

Yash Patel, Lluis Gomez, Marçal Rusiñol, Dimosthenis Karatzas, C.V. Jawahar
2019 arXiv pre-print
the learned representations are better for cross-modal retrieval when compared to supervised pre-training of the network on the ImageNet dataset.  ...  In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text on the entire set of Wikipedia articles.  ...  Self-Supervised Features for Image Classification 5.1.1 PASCAL VOC. Self-supervised learned features are tested for image classification on PASCAL VOC 2007 [9] dataset.  ... 
arXiv:1902.00378v1 fatcat:ksha6zs7u5a53cclm2xjrozu7i

Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective [article]

Yilun Jin, Xiguang Wei, Yang Liu, Qiang Yang
2020 arXiv pre-print
Federated Learning (FL) proposed in recent years has received significant attention from researchers in that it can bring separate data sources together and build machine learning models in a collaborative  ...  However, to the best of our knowledge, few existing works aim to utilize unlabeled data to enhance federated learning, which leaves a potentially promising research topic.  ...  There are also works tackling federated self-supervised feature learning on texts [Jiang et al., 2019; by learning topic models and language models.  ... 
arXiv:2002.11545v2 fatcat:tjmj3cowdzes3j5f2uhpokzgqm

Domain-agnostic Document Representation Learning Using Latent Topics and Metadata

Natraj Raman, Armineh Nourbakhsh, Sameena Shah, Manuela Veloso
2021 Proceedings of the ... International Florida Artificial Intelligence Research Society Conference  
Instead of traditional auto-regressive or auto-encoding based training, our novel self-supervised approach learns a soft-partition of the input space when generating text embeddings by employing a pre-learned topic model distribution as surrogate labels.  ...
doi:10.32473/flairs.v34i1.128388 fatcat:ipuvs2rys5g7rc4hwv6y6bfwde

Robust Document Representations using Latent Topics and Metadata [article]

Natraj Raman, Armineh Nourbakhsh, Sameena Shah, Manuela Veloso
2020 arXiv pre-print
Specifically, we employ a pre-learned topic model distribution as surrogate labels and construct a loss function based on KL divergence.  ...  Instead of traditional auto-regressive or auto-encoding based training, our novel self-supervised approach learns a soft-partition of the input space when generating text embeddings.  ...  Self-supervision during training is based on latent topic distribution and (optional) reconstruction for metadata.  ... 
arXiv:2010.12681v1 fatcat:hi7thmsswvcmth62lexzf5xdz4

Implementation of supplemental E-learning models for blended learning in pharmacology

Raakhi Tripathi, Dnyaneshwar Kurle, Sharmila Jalgaonkar, Pankaj Sarkate, Nirmala Rege
2017 National Journal of Physiology, Pharmacy and Pharmacology  
(1) Presupplemental model: First supervised pretest was conducted followed by online post-test on unexposed topic (i.e., before the lecture), (2) Postsupplemental model: Lecture, followed by supervised pretest then followed by online post-test, and (3) Replacement model: Supervised pretest on an unexposed topic followed by uploading of presentation on the topic for self-study followed by online post-test  ...  Specific learning objectives (SLOs) along with standard PowerPoint presentations were also prepared for the didactic lectures in all the three models.  ...
doi:10.5455/njppp.2017.7.0514527052017 fatcat:2suwxt5j5bgjldktqilojxqzqi

Measuring the Interpretability of Unsupervised Representations via Quantized Reverse Probing [article]

Iro Laina, Yuki M. Asano, Andrea Vedaldi
2022 arXiv pre-print
Self-supervised visual representation learning has recently attracted significant research interest.  ...  Finally, we propose to use supervised classifiers to automatically label large datasets in order to enrich the space of concepts used for probing.  ...  This is particularly true for unsupervised and self-supervised models which are learned without human supervision.  ... 
arXiv:2209.03268v1 fatcat:343bbwcc6vgjbk443z5am2sfp4

SupMPN: Supervised Multiple Positives and Negatives Contrastive Learning Model for Semantic Textual Similarity

Somaiyeh Dehghan, Mehmet Fatih Amasyali
2022 Applied Sciences
We evaluate our model on standard STS and transfer-learning tasks. The results reveal that SupMPN outperforms state-of-the-art SimCSE and all other previous supervised and unsupervised models.  ...  In this paper, we propose SupMPN: A Supervised Multiple Positives and Negatives Contrastive Learning Model, which accepts multiple hard-positive sentences and multiple hard-negative sentences simultaneously  ...  [34] proposed this model which is a self-supervised contrastive learning method using the redesign of NT-Xent objective with self-guidance.  ... 
doi:10.3390/app12199659 fatcat:j3d26qnqb5bwnds54tfyb7sqnm

An Unsupervised Sentence Embedding Method by Mutual Information Maximization [article]

Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing
2021 arXiv pre-print
In this paper, we propose a lightweight extension on top of BERT and a novel self-supervised learning objective based on mutual information maximization strategies to derive meaningful sentence embeddings  ...  It also outperforms SBERT in a setting where in-domain labeled data is not available, and achieves performance competitive with supervised methods on various tasks.  ...  In this work, we propose a novel unsupervised sentence embedding model with light-weight feature extractor on top of BERT for sentence encoding, and train it with a novel self-supervised learning objective  ... 
arXiv:2009.12061v2 fatcat:2anmjvwysjgkrlbhbstvy3lsqq

Beyond Just Vision: A Review on Self-Supervised Representation Learning on Multimodal and Temporal Data [article]

Shohreh Deldari, Hao Xue, Aaqib Saeed, Jiayuan He, Daniel V. Smith, Flora D. Salim
2022 arXiv pre-print
The popularity of self-supervised learning is driven by the fact that traditional models typically require a huge amount of well-annotated data for training.  ...  learning methods for temporal data.  ...  Almost all self-supervised learning approaches can fit into the workflow depicted.  ... 
arXiv:2206.02353v2 fatcat:ljkxvfxsand43otrpq4effs7cq

Self-Supervised Learning for Recommender System

Chao Huang, Xiang Wang, Xiangnan He, Dawei Yin
2022 Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval  
Recently, self-supervised learning (SSL) has become a promising learning paradigm to distill informative knowledge from unlabeled data, without the heavy reliance on sufficient supervision signals.  ...  In this tutorial, we aim to provide a systemic review of existing self-supervised learning frameworks and analyze the corresponding challenges for various recommendation scenarios, such as general collaborative  ...  Furthermore, we have published several papers on the topic of self-supervised learning for recommendation.  ... 
doi:10.1145/3477495.3532684 fatcat:ifyz3k66nzalvnlr5ltbf2q3cu
Showing results 1 — 15 out of 146,196 results