5,132 Hits in 4.8 sec

Loss factorization, weakly supervised learning and label noise robustness [article]

Giorgio Patrini, Frank Nielsen, Richard Nock, Marcello Carioni
2016 arXiv   pre-print
The result tightens known generalization bounds and sheds new light on their interpretation. Factorization has a direct application to weakly supervised learning.  ...  We prove that the empirical risk of most well-known loss functions factors into a linear term aggregating all labels with a term that is label free, and can further be expressed by sums of the loss.  ...  NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program.  ... 
arXiv:1602.02450v2 fatcat:3ynh5en4s5fhphvm2mlrowcaiq
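The factorization claimed in this abstract can be illustrated for the special case of a "linear-odd" loss, i.e. one satisfying ℓ(v) − ℓ(−v) = −v (the logistic loss is one such loss), with a linear scorer θᵀx and labels y ∈ {−1, +1}; the notation here is illustrative, not the paper's:

```latex
% Split the loss into even and odd parts in y \theta^\top x:
\ell(y\,\theta^\top x)
  = \tfrac{1}{2}\bigl[\ell(\theta^\top x) + \ell(-\theta^\top x)\bigr]
  - \tfrac{1}{2}\, y\,\theta^\top x .

% Summing over the sample, the empirical risk factors as
\hat{R}(\theta)
  = \underbrace{\frac{1}{2m}\sum_{i=1}^{m}
      \bigl[\ell(\theta^\top x_i) + \ell(-\theta^\top x_i)\bigr]}_{\text{label-free}}
  \;-\; \tfrac{1}{2}\,\theta^\top \mu,
\qquad
\mu = \frac{1}{m}\sum_{i=1}^{m} y_i x_i .
```

All label dependence is confined to the linear "mean operator" term θᵀμ, which is what makes the factorization useful when only aggregate or weak label information is available.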

Weakly Supervised Learning Meets Ride-Sharing User Experience Enhancement [article]

Lan-Zhe Guo, Feng Kuang, Zhang-Xun Liu, Yu-Feng Li, Nan Ma, Xiao-Hu Qie
2020 arXiv   pre-print
Weakly supervised learning aims at coping with scarce labeled data. Previous weakly supervised studies typically assume that there is only one kind of weak supervision in data.  ...  Robust criteria like AUC rather than accuracy and the validation performance are optimized for the correction of biased data label.  ...  Related Work The problem focused in this paper, i.e. compound weakly supervised learning, more specifically, is the intersection of label noise learning and label distribution bias problem.  ... 
arXiv:2001.09027v1 fatcat:66jqae54x5bn7bzfsfe4s4nsei

IWE-Net: Instance Weight Network for Locating Negative Comments and its application to improve Traffic User Experience

Lan-Zhe Guo, Feng Kuang, Zhang-Xun Liu, Yu-Feng Li, Nan Ma, Xiao-Hu Qie
2020 Proceedings of the AAAI Conference on Artificial Intelligence
Weakly supervised learning aims at coping with scarce labeled data. Previous weakly supervised studies typically assume that there is only one kind of weak supervision in data.  ...  Robust criteria like AUC rather than accuracy and the validation performance are optimized for the correction of biased data label.  ...  Related Work The problem focused in this paper, i.e. compound weakly supervised learning, more specifically, is the intersection of label noise learning and label distribution bias problem.  ... 
doi:10.1609/aaai.v34i04.5823 fatcat:lhychuurenftpmvqnvbtfrthmq

A Closer Look at Weak Label Learning for Audio Events [article]

Ankit Shah, Anurag Kumar, Alexander G. Hauptmann, Bhiksha Raj
2018 arXiv   pre-print
The analysis and understanding of these factors should be taken into account in the development of future weak label learning methods.  ...  More specifically, we study how characteristics such as label density and corruption of labels affect weakly supervised training for audio events.  ...  However, for sounds, weakly supervised learning has only recently come up, and the impact and relevance of factors such as noise in the data is yet to be analyzed, understood and explored.  ... 
arXiv:1804.09288v1 fatcat:uqhv4m7u7vb2hjlinsxzmp7ty4

Transferable Curriculum for Weakly-Supervised Domain Adaptation

Yang Shu, Zhangjie Cao, Mingsheng Long, Jianmin Wang
2019 Proceedings of the AAAI Conference on Artificial Intelligence
Thus, weakly-supervised domain adaptation has been introduced to address this difficulty, where we can tolerate the source domain with noise in labels, features, or both.  ...  A thorough evaluation shows that our approach significantly outperforms the state-of-the-art on weakly-supervised domain adaptation tasks.  ...  Acknowledgements This work was supported by the National Key R&D Program of China (No. 2016YFB1000701) and Natural Science Foundation of China (61772299, 61502265, 71690231).  ... 
doi:10.1609/aaai.v33i01.33014951 fatcat:vesnm2caufaabcetavz4e5wwki

From Weakly Supervised Learning to Biquality Learning: an Introduction [article]

Pierre Nodet, Vincent Lemaire, Alexis Bondu, Antoine Cornuéjols, Adam Ouorou
2021 arXiv   pre-print
The field of Weakly Supervised Learning (WSL) has recently seen a surge of popularity, with numerous papers addressing different types of "supervision deficiencies".  ...  Thus we suggest that the Biquality Learning framework can be defined as a plane of the WSL cube, and propose to re-discover previously unrelated patches in the WSL literature as a unified Biquality Learning literature  ...  Below is a non-exhaustive list of common ways to learn a model in the presence of labeling noise 1 : • in case of marginal noise level, a standard learning algorithm that is natively robust to label noise  ... 
arXiv:2012.09632v3 fatcat:o3a2fcsuc5cdrfdm4cm5qbrsz4

Joint-Modal Label Denoising for Weakly-Supervised Audio-Visual Video Parsing [article]

Haoyue Cheng, Zhaoyang Liu, Hang Zhou, Chen Qian, Wayne Wu, Limin Wang
2022 arXiv   pre-print
This paper focuses on the weakly-supervised audio-visual video parsing task, which aims to recognize all events belonging to each modality and localize their temporal boundaries.  ...  Motivated by two observations that networks tend to learn clean samples first and that a labeled event would appear in at least one modality, we propose a training strategy to identify and remove modality-specific  ...  From the view of loss patterns, the loss of cleanly labeled samples would be lower than noisily labeled ones. 2) Under the weakly supervised training setting, an event label ought not to serve as noise  ... 
arXiv:2204.11573v2 fatcat:vxvw2bpb25g2tetzv5zxfdh4c4

Training Deep Neural Networks on Noisy Labels with Bootstrapping [article]

Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, Andrew Rabinovich
2015 arXiv   pre-print
In experiments we demonstrate that our approach yields substantial robustness to label noise on several datasets. On MNIST handwritten digits, we show that our model is robust to label corruption.  ...  Current state-of-the-art deep learning systems for visual object recognition and detection use purely supervised training with regularization such as dropout to avoid overfitting.  ...  and on other papers on weakly-and semi-supervised deep learning.  ... 
arXiv:1412.6596v3 fatcat:3anl7ywi6fhzva6uzdnhvdk32a
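The bootstrapping idea in this entry blends the (possibly noisy) observed label with the model's own prediction when forming the training target. A minimal NumPy sketch of the soft variant, with function and parameter names chosen here for illustration:

```python
import numpy as np

def soft_bootstrap_loss(q, t, beta=0.95):
    """Soft bootstrapping cross-entropy, in the spirit of Reed et al. (2015).

    q    : predicted class probabilities, shape (n, k), rows sum to 1
    t    : one-hot (possibly noisy) labels, shape (n, k)
    beta : weight on the observed label; the rest goes to the model's
           own prediction, so confident predictions can partially
           override corrupted labels.
    """
    eps = 1e-12                       # guard against log(0)
    target = beta * t + (1.0 - beta) * q
    return -np.mean(np.sum(target * np.log(q + eps), axis=1))
```

With `beta=1.0` this reduces to the ordinary cross-entropy on the given labels; smaller `beta` trusts the labels less, which is the knob that yields the label-noise robustness the abstract reports.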

Robust Semisupervised Land-use Classification using Remote Sensing Data with Weak Labels

Rui Wang, Man-On Pun
2021 IEEE Access  
amount of weakly labeled data.  ...  b) for those pixels with weak labels.  ... 
doi:10.1109/access.2021.3109989 fatcat:xym6kll5tjar7mhwir3odmnc6y

Multimodal Co-learning: Challenges, Applications with Datasets, Recent Advances and Future Directions [article]

Anil Rahate, Rahee Walambe, Sheela Ramanna, Ketan Kotecha
2021 arXiv   pre-print
However, in real-world tasks, typically, it is observed that one or more modalities are missing, noisy, lacking annotated data, have unreliable labels, and are scarce in training, testing, or both  ...  Multimodal machine learning involves multiple aspects: representation, translation, alignment, fusion, and co-learning.  ...  Meta-learning is also robust to label noise and adversarial attacks.  ... 
arXiv:2107.13782v2 fatcat:s4spofwxjndb7leqbcqnwbifq4

Voice activity detection in the wild via weakly supervised sound event detection [article]

Heinrich Dinkel, Yefei Chen, Mengyue Wu, Kai Yu
2020 arXiv   pre-print
In contrast, we propose a general-purpose VAD (GPVAD) framework, which can be easily trained from noisy data in a weakly supervised fashion, requiring only clip-level labels.  ...  One possible bottleneck is that speech in the wild contains unpredictable noise types, hence frame-level label prediction is difficult, which is required for traditional supervised VAD training.  ...  Acknowledgements This work has been supported by National Natural Science Foundation of China (No.61901265) and Shanghai Pujiang Program (No.19PJ1406300).  ... 
arXiv:2003.12222v6 fatcat:isrvpzgds5hapegkrh3phaqvqy

GearNet: Stepwise Dual Learning for Weakly Supervised Domain Adaptation [article]

Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An
2022 arXiv   pre-print
This interactive learning schema enables implicit label noise canceling and exploits correlations between the source and target domains.  ...  This paper studies the weakly supervised domain adaptation (WSDA) problem, where we only have access to the source domain with noisy labels, from which we need to transfer useful information to the unlabeled  ...  Lei Feng was supported by the National Natural Science Foundation of China under Grant 62106028 and CAAI-Huawei MindSpore Open Fund.  ... 
arXiv:2201.06001v2 fatcat:l2xslolx4bdg5ami5hdrmpviam

Decoupled Gradient Harmonized Detector for Partial Annotation: Application to Signet Ring Cell Detection [article]

Tiancheng Lin, Yuanfan Guo, Canqian Yang, Jiancheng Yang, Yi Xu
2020 arXiv   pre-print
noise.  ...  Ablation studies and controlled label missing rate experiments demonstrate that DGHM-C loss can bring substantial improvement in partially annotated object detection.  ...  Here, we classify detectors into full-supervised, weakly supervised and noisy-supervised with regard to the quality of labels.  ... 
arXiv:2004.04455v1 fatcat:q3rn2o567vghbb5uitvai223ru

Disentanglement and Generalization Under Correlation Shifts [article]

Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge
2021 arXiv   pre-print
We then apply our method on real-world datasets based on MNIST and CelebA, and show that it yields models that are disentangled and robust under correlation shift, including in weakly supervised settings  ...  Disentanglement methods aim to learn representations which capture different factors of variation in latent subspaces.  ...  We acknowledge support from the German Federal Ministry of Education and Research (BMBF) through the Competence Center for Machine Learning (FKZ 01IS18039A) and the Bernstein Computational Neuroscience  ... 
arXiv:2112.14754v1 fatcat:izp76w2wffaf7gvdtnjhwkol6u

Learning of Inter-Label Geometric Relationships Using Self-Supervised Learning: Application To Gleason Grade Segmentation [article]

Dwarikanath Mahapatra
2021 arXiv   pre-print
We propose a method to synthesize PCa histopathology images by learning the geometrical relationship between different disease labels using self-supervised learning.  ...  as fully supervised learning.  ...  Self-Supervised Learning Self-supervised learning methods consist of two distinct approaches: 1) pretext tasks and 2) loss functions used for down-stream tasks.  ... 
arXiv:2110.00404v1 fatcat:276y4oi2nzeqtmslyvs3kmvdrm
Showing results 1 — 15 of 5,132 results