7,186 Hits in 5.0 sec

Neural Ranking Models with Weak Supervision [article]

Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, W. Bruce Croft
2017 arXiv   pre-print
To this aim, we use the output of an unsupervised ranking model, such as BM25, as a weak supervision signal.  ...  The reason may be the complexity of the ranking problem, as it is not obvious how to learn from queries and documents when no supervised signal is available.  ...  Another experiment is to leverage multiple weak supervision signals from different sources.  ... 
arXiv:1704.08803v2 fatcat:lapuhiulofajflk52muufru66i
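The recipe in this abstract — score query–document pairs with an unsupervised ranker like BM25 and use the resulting ordering as the training signal for a neural ranker — can be sketched in a few lines. Everything below (the tiny tokenized corpus, the `weak_pairs` helper) is illustrative and not taken from the paper:

```python
import math
from collections import Counter

def bm25_score(query, doc, docs, k1=1.2, b=0.75):
    """Okapi BM25 score of one tokenized document against a tokenized query."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    tf = Counter(doc)
    score = 0.0
    for term in query:
        df = sum(1 for d in docs if term in d)               # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)      # smoothed IDF
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

def weak_pairs(query, docs):
    """Rank documents by BM25 and emit (better, worse) pairs — the weak
    supervision signal a pairwise neural ranker would be trained on."""
    ranked = sorted(docs, key=lambda d: bm25_score(query, d, docs), reverse=True)
    return [(ranked[i], ranked[j])
            for i in range(len(ranked)) for j in range(i + 1, len(ranked))]
```

No relevance judgments are needed: the preference pairs come entirely from the unsupervised ranker, which is exactly what makes the signal weak (and noisy).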

Investigating Weak Supervision in Deep Ranking

Yukun Zheng, Yiqun Liu, Zhen Fan, Cheng Luo, Qingyao Ai, Min Zhang, Shaoping Ma
2019 Data and Information Management  
This work reveals the potential of constructing better document retrieval systems based on multiple kinds of weak relevance signals.  ...  We further proposed a cascade ranking framework to combine the two weak relevance signals, which significantly promotes the ranking performance of neural ranking models and outperforms the best result  ...  What is the difference between a click label and a BM25 label as a weak supervision signal?  ... 
doi:10.2478/dim-2019-0010 fatcat:gjutpp777vdvljqxtf2r6nvuwy

Passage Ranking with Weak Supervision [article]

Peng Xu, Xiaofei Ma, Ramesh Nallapati, Bing Xiang
2019 arXiv   pre-print
In this paper, we propose a weak supervision framework for neural ranking tasks based on the data programming paradigm, which enables us to leverage multiple weak supervision signals from different sources  ...  We train a BERT-based passage-ranking model (which achieves new state-of-the-art performance on two benchmark datasets with full supervision) in our weak supervision framework.  ...  The idea of weak supervision is to extract signals from the noisy labels to train our model. Dehghani et al. (2017) first applied weak supervision techniques to train deep neural ranking models.  ... 
arXiv:1905.05910v2 fatcat:aoefcfrpffhpzom7vbjs7clovy
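Data programming, as invoked here, means writing several noisy labeling functions and aggregating their votes into weak labels. The sketch below uses simple majority voting and invented heuristics purely for illustration; real data programming systems (e.g. Snorkel) instead learn per-source accuracies from the agreement pattern of the functions:

```python
ABSTAIN = 0  # a labeling function may decline to vote

def lf_keyword_overlap(query, passage):
    # Vote relevant if any query term appears in the passage.
    return 1 if set(query) & set(passage) else ABSTAIN

def lf_length(query, passage):
    # Heuristic: vote non-relevant for very short passages.
    return -1 if len(passage) < 3 else ABSTAIN

def lf_exact_phrase(query, passage):
    # Rough heuristic: vote relevant if the query occurs verbatim.
    return 1 if " ".join(query) in " ".join(passage) else ABSTAIN

def aggregate(votes):
    """Majority vote over non-abstaining functions; None means no signal."""
    votes = [v for v in votes if v != ABSTAIN]
    if not votes:
        return None
    return 1 if sum(votes) > 0 else -1

def weak_label(query, passage,
               lfs=(lf_keyword_overlap, lf_length, lf_exact_phrase)):
    return aggregate([lf(query, passage) for lf in lfs])
```

The point of combining multiple sources is that individually unreliable heuristics can cancel each other's noise once aggregated.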

Avoiding Your Teacher's Mistakes: Training Neural Networks with Controlled Weak Supervision [article]

Mostafa Dehghani, Aliaksei Severyn, Sascha Rothe, Jaap Kamps
2017 arXiv   pre-print
This makes weak supervision attractive, using weak or noisy signals like the output of heuristic methods or user click-through data for training.  ...  In a semi-supervised setting, we can use a large set of data with weak labels to pretrain a neural network and then fine-tune the parameters with a small amount of data with true labels.  ...  Or instead of using alternating sampling, we tried training the target network using controlled weak supervision signals after the confidence network is fully trained.  ... 
arXiv:1711.00313v2 fatcat:22ghuuvlofcjtgayffw7gebbeu
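The controlled-weak-supervision idea — scale each weak example's gradient by a learned confidence in its label — can be illustrated with plain logistic regression. In this sketch the confidence network is replaced by a fixed model fit on the small clean set; in the paper, the confidence network and the target network are trained jointly, so this is only a stand-in:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def confidence(x, y_weak, clean_model):
    """Stand-in for the confidence network: trust a weak label more when
    a model fit on the clean set agrees with it (labels are 1 or 0)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(clean_model, x)))
    return p if y_weak == 1 else 1.0 - p

def weighted_sgd_step(w, batch, clean_model, lr=0.5):
    """Target-network update: each weak example's gradient is scaled by
    its confidence score, down-weighting likely teacher mistakes."""
    for x, y_weak in batch:
        c = confidence(x, y_weak, clean_model)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        g = (p - (1 if y_weak == 1 else 0)) * c  # confidence-weighted gradient
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w
```

A weak label the clean model disagrees with gets a small confidence and therefore a small update, which is the "avoiding your teacher's mistakes" mechanism in miniature.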

Share your Model instead of your Data: Privacy Preserving Mimic Learning for Ranking [article]

Mostafa Dehghani, Hosein Azarbonyad, Jaap Kamps, Maarten de Rijke
2017 arXiv   pre-print
., using predictions from a privacy preserving trained model instead of labels from the original sensitive training data as a supervision signal.  ...  Deep neural networks have become a primary tool for solving problems in many fields. They are also used for addressing information retrieval problems and show strong performance in several tasks.  ...  Results in the table suggest that using the noisy aggregation of multiple teachers as the supervision signal, we can train a neural ranker with an acceptable performance.  ... 
arXiv:1707.07605v1 fatcat:s66vtqn4rrhpfglb3dj4ebtytu
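The mimic-learning setup described here — relabel public data with the noisy aggregation of several teachers trained on private shards, then train a shareable student on those labels — can be sketched as follows. The helper names are invented, and Gaussian noise stands in for the Laplace noise used in PATE-style aggregation:

```python
import random

def aggregate_teachers(teachers, x, noise=1.0, rng=random):
    """Noisy vote aggregation over binary classes: each teacher (trained on
    its own private shard) votes; noise is added before the argmax, so the
    student never observes the sensitive labels directly."""
    votes = [0, 0]
    for t in teachers:
        votes[t(x)] += 1
    noisy = [v + rng.gauss(0, noise) for v in votes]  # Gaussian stand-in for Laplace
    return 0 if noisy[0] >= noisy[1] else 1

def mimic_labels(teachers, public_xs, noise=1.0, rng=random):
    """Relabel a public, non-sensitive dataset with aggregated teacher
    predictions; these pairs are what the shareable student trains on."""
    return [(x, aggregate_teachers(teachers, x, noise, rng)) for x in public_xs]
```

Only the student model (or its labels on public data) is ever shared, which is the privacy argument: the sensitive training data stays behind the teachers.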

Dare to share your information!

Trina Innes
2002 Forestry Chronicle  
., using predictions from a privacy preserving trained model instead of labels from the original sensitive training data as a supervision signal.  ...  ABSTRACT Deep neural networks have become a primary tool for solving problems in many fields. They are also used for addressing information retrieval problems and show strong performance in several tasks.  ...  Results in the table suggest that using the noisy aggregation of multiple teachers as the supervision signal, we can train a neural ranker with an acceptable performance.  ... 
doi:10.5558/tfc78090-1 fatcat:6jojcp7b45bnncbmvqs2mc5pba

Towards Theoretical Understanding of Weak Supervision for Information Retrieval [article]

Hamed Zamani, W. Bruce Croft
2018 arXiv   pre-print
We briefly review a set of our recent theoretical findings that shed light on learning from weakly supervised data, and provide guidelines on how to train learning-to-rank models with weak supervision.  ...  However, neural approaches often require large volumes of training data to perform effectively, which is not always available.  ...  [19] studied the problem of learning from multiple weak supervision labels to achieve state-of-the-art results on the query performance prediction task.  ... 
arXiv:1806.04815v1 fatcat:kwowb2adkjfo5drxwndklqhkly

Recommending Courses in MOOCs for Jobs: An Auto Weak Supervision Approach [article]

Bowen Hao, Jing Zhang, Cuiping Li, Hong Chen, Hongzhi Yin
2020 arXiv   pre-print
Despite the advances of supervised ranking models, the lack of enough supervised signals prevents us from directly learning a supervised ranking model.  ...  On the one hand, the framework enables training multiple supervised ranking models upon the pseudo labels produced by multiple unsupervised ranking models.  ...  [9] train a neural query performance predictor using multiple weak supervision signals, and they also provide a theoretical analysis of this weak supervision method [18].  ... 
arXiv:2012.14234v1 fatcat:zqi5gavrujhr3jypi35u6hx3ei

Unified Semantic Parsing with Weak Supervision

Priyanka Agrawal, Ayushi Dalmia, Parag Jain, Abhishek Bansal, Ashish Mittal, Karthik Sankaranarayanan
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
To solve this, we incorporate a multi-policy distillation mechanism in which we first train domain-specific semantic parsers (teachers) using weak supervision in the absence of the ground truth programs  ...  To overcome this, we propose a novel framework to build a unified multi-domain enabled semantic parser trained only with weak supervision (denotations).  ...  We observe that due to weak signal strength and the enlarged search space from multiple domains, the WEAK-COMBINED model performs poorly across domains.  ... 
doi:10.18653/v1/p19-1473 dblp:conf/acl/AgrawalDJBMS19 fatcat:shwrsvoqfbag7kzhtrc7gvvm64

Coupling Distributed and Symbolic Execution for Natural Language Queries [article]

Lili Mou, Zhengdong Lu, Hang Li, Zhi Jin
2017 arXiv   pre-print
An executor for table querying typically requires multiple steps of execution because queries may have complicated structures.  ...  Building neural networks to query a knowledge base (a table) with natural language is an emerging research topic in deep learning.  ...  In machine translation, Mi et al. (2016) use alignment heuristics to train the attention signal of neural networks in a supervised manner.  ... 
arXiv:1612.02741v4 fatcat:iv2fnigvdrbzbfjnynpljqksqy

Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [article]

Kai Shu, Guoqing Zheng, Yichuan Li, Subhabrata Mukherjee, Ahmed Hassan Awadallah, Scott Ruston, Huan Liu
2020 arXiv   pre-print
In this work, we exploit multiple weak signals from different sources given by user and content engagements (referred to as weak social supervision), and their complementary utilities to detect fake news  ...  We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework to estimate the quality of different weak  ...  Weak signals are used as constraints to regularize prediction models [30], or as loss correction mechanisms [6]. Often only a single source of weak labels is used.  ... 
arXiv:2004.01732v1 fatcat:djfuown67zbxpcjygi7mv2rqde

CALDA: Improving Multi-Source Time Series Domain Adaptation with Contrastive Adversarial Learning [article]

Garrett Wilson, Janardhan Rao Doppa, Diane J. Cook
2021 arXiv   pre-print
In cases where meta-domain information such as label distributions is available, weak supervision can further boost performance. We propose a novel framework, CALDA, to tackle these two problems.  ...  Weak supervision further improves performance, even in the presence of noise, allowing CALDA to offer generalizable strategies for MS-UDA. Code is available at: https://github.com/floft/calda  ...  [24] propose using a different form of weak supervision for human activity recognition from video data, using multiple incomplete or uncertain labels. Pathak et al.  ... 
arXiv:2109.14778v1 fatcat:7cn2m2d3dfevjfshgrmdcwrwle

Weakly Supervised Few-shot Object Segmentation using Co-Attention with Visual and Semantic Embeddings

Mennatullah Siam, Naren Doraiswamy, Boris N. Oreshkin, Hengshuai Yao, Martin Jagersand
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
Our results show that few-shot segmentation benefits from utilizing word embeddings, and that we are able to perform few-shot segmentation using stacked joint visual semantic processing with weak image-level  ...  It also outperforms state-of-the-art methods that use weak bounding box supervision on PASCAL-5^i.  ...  Our method iteratively guides a bidirectional co-attention between the support and the query sets using both visual and neural word embedding inputs, using only image-level supervision as shown in Fig  ... 
doi:10.24963/ijcai.2020/120 dblp:conf/ijcai/SiamDOYJ20 fatcat:3sgxelamqvaorjkl64l4vc7vdm

Weakly Supervised Few-shot Object Segmentation using Co-Attention with Visual and Semantic Embeddings [article]

Mennatullah Siam, Naren Doraiswamy, Boris N. Oreshkin, Hengshuai Yao, Martin Jagersand
2020 arXiv   pre-print
Our results show that few-shot segmentation benefits from utilizing word embeddings, and that we are able to perform few-shot segmentation using stacked joint visual semantic processing with weak image-level  ...  It also outperforms state-of-the-art methods that use weak bounding box supervision on PASCAL-5i.  ...  Our method iteratively guides a bidirectional co-attention between the support and the query sets using both visual and neural word embedding inputs, using only image-level supervision as shown in Fig  ... 
arXiv:2001.09540v3 fatcat:tw7bceylr5bslhyn76msotbl3e

The Word is Mightier than the Label: Learning without Pointillistic Labels using Data Programming [article]

Chufan Gao, Mononito Goswami
2021 arXiv   pre-print
Recently, some studies have explored the use of diverse sources of weak supervision to produce competitive end model classifiers.  ...  In this paper, we survey recent work on weak supervision, and in particular, we investigate the Data Programming (DP) framework.  ...  Our results over multiple text classification datasets reveal that weak supervision performs on par with its fully supervised counterpart without access to pointillistic ground truth labels.  ... 
arXiv:2108.10921v2 fatcat:h7q6ggmnlvgzhpwftoziobnzlq