95,938 Hits in 7.0 sec

Pseudo Label Is Better Than Human Label [article]

Dongseong Hwang, Khe Chai Sim, Zhouyuan Huo, Trevor Strohman
2022 arXiv   pre-print
using human labels.  ...  This model achieved a 4.0% word error rate (WER) on a voice search task, 11.1% relatively better than a baseline.  ...  Human label vs. pseudo label: in Sections 3.2.1 and 3.2.2, we have shown that using pseudo labels for training is better than using human labels.  ... 
arXiv:2203.12668v3 fatcat:cgcqnldibva5fk2w6jcbstey34

Pseudo Pixel-level Labeling for Images with Evolving Content [article]

Sara Mousavi, Zhenning Yang, Kelley Cross, Dawnie Steadman, Audris Mockus
2021 arXiv   pre-print
We leverage the evolving nature of images depicting the decay process in human decomposition data to design a simple yet effective pseudo-pixel-level label generation technique to reduce the amount of  ...  To evaluate the quality of our pseudo-pixel-level labels, we train two semantic segmentation models with VGG and ResNet backbones on images labeled using our pseudo labeling method and those of a state-of-the-art  ...  A single image from each sequence is presented to a human annotator to be manually annotated. Second, we generate CAM-based pseudo-pixel-level labels for the available weakly labeled images [5, 1] .  ... 
arXiv:2105.09975v1 fatcat:ueixnuia3nc5dnffz4z43rcnoq

Cross-Domain Adaptation for Animal Pose Estimation [article]

Jinkun Cao, Hongyang Tang, Hao-Shu Fang, Xiaoyong Shen, Cewu Lu and Yu-Wing Tai
2019 arXiv   pre-print
Therefore, the easily available human pose dataset, which is of a much larger scale than our labeled animal dataset, provides important prior knowledge to boost up the performance on animal pose estimation  ...  Considering the heavy labor needed to label a dataset, and that it is impossible to label data for all animal species of concern, we therefore propose a novel cross-domain adaptation method to transform the  ...  Acknowledgement This work is supported in part by the National Key R&D Program of China, No.2017YFA0700800, National Natural Science Foundation of China under Grants 61772332.  ... 
arXiv:1908.05806v2 fatcat:ym63kzee4vabrjghhroviewytu

Naive-Student: Leveraging Semi-Supervised Learning in Video Sequences for Urban Scene Segmentation [article]

Liang-Chieh Chen, Raphael Gontijo Lopes, Bowen Cheng, Maxwell D. Collins, Ekin D. Cubuk, Barret Zoph, Hartwig Adam, Jonathon Shlens
2020 arXiv   pre-print
Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data. The procedure is iterated several times.  ...  This limitation is particularly notable for image segmentation tasks, where the expense of human annotation is especially large, yet large amounts of unlabeled data may exist.  ...  We think it is because of the inconsistent annotations between human-labeled and pseudo-labeled images, since train-fine is a subset of train-sequence.  ... 
arXiv:2005.10266v4 fatcat:dgq3kf7j4fdi3kejbdnbyhv3lm

Few Shots Are All You Need: A Progressive Few Shot Learning Approach for Low Resource Handwriting Recognition [article]

Mohamed Ali Souibgui, Alicia Fornés, Yousri Kessentini, Beáta Megyesi
2022 arXiv   pre-print
approach that automatically assigns pseudo-labels to the non-annotated data.  ...  A second training step is then applied to diminish the gap between the source and target data.  ...  Also, for common scripts but with few labeled data, pseudo-labels can be predicted to train usual HTRs, which may lead to better results than the few-shot ones.  ... 
arXiv:2107.10064v2 fatcat:go6ohcvk7rfc3keez6uplnjdqu

Self-Supervised Learning for Human Pose Estimation in Sports

Katja Ludwig, Sebastian Scherer, Moritz Einfalt, Rainer Lienhart
2021 2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)  
The first proposed method uses pseudo labels as a self-supervised training technique together with a filtering method for the pseudo labels.  ...  Human pose estimation (HPE) is a commonly used technique to determine derived parameters that are important to improve the performance of athletes in many sports disciplines.  ...  The table shows that the mean teacher results are slightly worse than the pseudo label results after the final iterations, but still perform far better than the supervised training on 50 images.  ... 
doi:10.1109/icmew53276.2021.9456000 fatcat:iebxmfrvibh53h2osn2mrsb3i4
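The pseudo-label filtering mentioned in the snippet above can be sketched as follows. This is a minimal numpy sketch of confidence-based filtering, a common choice; the cutoff value and the max-probability criterion are illustrative assumptions, not details given in the snippet:

```python
import numpy as np

def filter_pseudo_labels(probs, cutoff=0.8):
    """Keep only unlabeled examples whose top-class probability clears the
    cutoff; return their indices and hard pseudo labels.
    NOTE: cutoff=0.8 and the max-probability criterion are illustrative
    assumptions, not the paper's stated method."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)          # per-example confidence
    keep = np.flatnonzero(conf >= cutoff)
    return keep, probs[keep].argmax(axis=1)

# Example: three unlabeled examples, two classes; the middle one is too
# uncertain and is dropped from the pseudo-labeled training set.
keep, labels = filter_pseudo_labels([[0.1, 0.9], [0.55, 0.45], [0.95, 0.05]])
```

Only the confidently-predicted examples (indices 0 and 2 here) would be added to the self-supervised training set.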

Semi-Supervised Active Learning for COVID-19 Lung Ultrasound Multi-symptom Classification [article]

Lei Liu, Wentao Lei, Yongfang Luo, Cheng Feng, Xiang Wan, Li Liu
2021 arXiv   pre-print
On this basis, a multi-symptom multi-label (MSML) classification network is proposed to learn discriminative features of lung symptoms, and a human-machine interaction is exploited to confirm the final  ...  The core component of TSAL is the multi-label learning mechanism, in which label correlation information is used to design the multi-label margin (MLM) strategy and confidence validation for automatically  ...  For each symptom, if its prediction probability is higher than the threshold, its pseudo label is set as "1", otherwise it is "0".  ... 
arXiv:2009.05436v2 fatcat:puomfpz4yvezhnwndy7szbc6ti
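The per-symptom thresholding rule quoted in the snippet above is straightforward to express in code. A minimal numpy sketch, where the threshold value of 0.5 is an illustrative assumption (the snippet does not state the actual value):

```python
import numpy as np

def pseudo_labels(probs, threshold=0.5):
    """Per-symptom binary pseudo labels: 1 if the predicted probability
    is higher than the threshold, else 0 (the rule quoted in the snippet).
    NOTE: threshold=0.5 is an illustrative assumption."""
    return (np.asarray(probs) > threshold).astype(int)

# Example: predicted probabilities for four symptoms of one ultrasound image
print(pseudo_labels([0.91, 0.12, 0.55, 0.49]))  # -> [1 0 1 0]
```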

Analysis of Semi-Supervised Methods for Facial Expression Recognition [article]

Shuvendu Roy, Ali Etemad
2022 arXiv   pre-print
We conduct a comparative study of eight semi-supervised learning methods, namely Pi-Model, Pseudo-label, Mean-Teacher, VAT, MixMatch, ReMixMatch, UDA, and FixMatch, on three FER datasets (FER13, RAF-DB,  ...  Training deep neural networks for image recognition often requires large-scale human annotated data.  ...  This performance is 11% better than the fully-supervised model trained with the same amount of labeled samples.  ... 
arXiv:2208.00544v1 fatcat:ie5t4cbez5fd5b3vjys5rdzmv4

Bootstrap Your Object Detector via Mixed Training [article]

Mengde Xu, Zheng Zhang, Fangyun Wei, Yutong Lin, Yue Cao, Stephen Lin, Han Hu, Xiang Bai
2021 arXiv   pre-print
In addition, it addresses localization noise and missing labels in human annotations by incorporating pseudo boxes that can compensate for these errors.  ...  boxes thanks to the robustness of neural networks to labeling error.  ...  In addition, a pseudo box mechanism is introduced to address label noise in human annotation.  ... 
arXiv:2111.03056v1 fatcat:rdy2hsvoungcrm3hw6jlbfbkkm

Improving Semantic Segmentation via Self-Training [article]

Yi Zhu, Zhongyue Zhang, Chongruo Wu, Zhi Zhang, Tong He, Hang Zhang, R. Manmatha, Mu Li, Alexander Smola
2020 arXiv   pre-print
Our robust training framework can digest human-annotated and pseudo labels jointly and achieve top performances on Cityscapes, CamVid and KITTI datasets while requiring significantly less supervision.  ...  We first train a teacher model on labeled data, and then generate pseudo labels on a large set of unlabeled data.  ...  In terms of labels, our experiments show that using hard labels in general performs better than using soft labels.  ... 
arXiv:2004.14960v2 fatcat:32gbkc2buraibhfaathn5dcgvq
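The teacher-student loop described in the snippet above (train a teacher on labeled data, pseudo-label the unlabeled pool, train a student on both) can be sketched in a few lines. This is a toy numpy sketch only: the nearest-centroid "teacher" stands in for the real segmentation model and is an assumption for illustration, not the paper's architecture:

```python
import numpy as np

def self_train_round(x_labeled, y_labeled, x_unlabeled):
    """One round of self-training:
    (1) fit a teacher on labeled data (a nearest-centroid stand-in here),
    (2) predict hard pseudo labels for the unlabeled pool,
    (3) return the combined set a student would be trained on."""
    classes = np.unique(y_labeled)
    centroids = np.stack([x_labeled[y_labeled == c].mean(axis=0) for c in classes])
    # hard pseudo labels: nearest class centroid for each unlabeled example
    dists = np.linalg.norm(x_unlabeled[:, None, :] - centroids[None, :, :], axis=2)
    pseudo = classes[dists.argmin(axis=1)]
    return (np.vstack([x_labeled, x_unlabeled]),
            np.concatenate([y_labeled, pseudo]))
```

A student trained on the returned set digests human-annotated and pseudo labels jointly, as the snippet describes; the snippet also notes hard pseudo labels worked better than soft ones in that paper's experiments.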

Fashion Landmark Detection in the Wild [article]

Ziwei Liu, Sijie Yan, Ping Luo, Xiaogang Wang, Xiaoou Tang
2016 arXiv   pre-print
To encourage future studies, we introduce a fashion landmark dataset with over 120K images, where each image is labeled with eight landmarks.  ...  Fashion landmark is also compared to clothing bounding boxes and human joints in two applications, fashion attribute prediction and clothes retrieval, showing that fashion landmark is a more discriminative  ...  In stage-1, we find that using soft labels, denoted as '+p. labels (T = 20)', instead of hard labels, denoted as '+p. labels (T = 1)', results in better performance, because soft labels are more informative  ... 
arXiv:1608.03049v1 fatcat:tvrfez4tlranvdff6s42zax55e
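The soft- vs. hard-label distinction in the snippet above is the usual temperature-scaled softmax: at T = 1 the teacher's output is sharply peaked (close to a hard label), while a large T such as 20 spreads probability mass across classes, making the pseudo label more informative. A minimal sketch, assuming logits are the teacher's raw scores:

```python
import numpy as np

def soft_labels(logits, T=20.0):
    """Temperature-scaled softmax. T=1 gives the ordinary (peaky) softmax;
    larger T flattens the distribution so non-maximal classes carry signal."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                  # numerical stability
    e = np.exp(z)
    return e / e.sum()

hard_ish = soft_labels([4.0, 1.0, 0.0], T=1.0)   # peaked distribution
soft = soft_labels([4.0, 1.0, 0.0], T=20.0)      # flatter, more informative
```

Both distributions sum to 1 and agree on the top class; only the sharpness differs, which is the property the snippet credits for the better stage-1 performance.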

Self-Supervised Animation Synthesis Through Adversarial Training

Cheng Yu, Wenmin Wang, Jianhao Yan
2020 IEEE Access  
Our experimental results show that an appropriate number of the pseudo-label dimensions can better characterize animation features.  ...  The label-learning model can be implemented on a large number of out-of-order samples to generate two types of pseudo-labels, discrete labels and continuous labels.  ...  The overall effect of our method is better than TGAN. The specific evaluation can be seen in Tab. 5.  ... 
doi:10.1109/access.2020.3008523 fatcat:ktuwmb3vvjcuzdyxtq37maz26i

The GIST and RIST of Iterative Self-Training for Semi-Supervised Segmentation [article]

Eu Wern Teh, Terrance DeVries, Brendan Duke, Ruowei Jiang, Parham Aarabi, Graham W. Taylor
2022 arXiv   pre-print
We show that iterative self-training leads to performance degradation if done naïvely with a fixed ratio of human-labeled to pseudo-labeled training examples.  ...  We propose Greedy Iterative Self-Training (GIST) and Random Iterative Self-Training (RIST) strategies that alternate between training on either human-labeled data or pseudo-labeled data at each refinement  ...  combination of human-labeled data and pseudo-labels.  ... 
arXiv:2103.17105v3 fatcat:meon2vt2mzb5xllhjrg53hpdhq

Open Vocabulary Object Detection with Pseudo Bounding-Box Labels [article]

Mingfei Gao, Chen Xing, Juan Carlos Niebles, Junnan Li, Ran Xu, Wenhao Liu, Caiming Xiong
2022 arXiv   pre-print
Our method leverages the localization ability of pre-trained vision-language models to generate pseudo bounding-box labels and then directly uses them for training object detectors.  ...  Code is available at  ...  Our method generates pseudo bounding box labels to alleviate human labeling efforts.  ... 
arXiv:2111.09452v3 fatcat:74o64jnhxfdrhetlez27ehnwiq

G2L: A Geometric Approach for Generating Pseudo-labels that Improve Transfer Learning [article]

John R. Kender, Bishwaranjan Bhattacharjee, Parijat Dube, Brian Belgodere
2022 arXiv   pre-print
Transfer learning is a deep-learning technique that ameliorates the problem of learning when human-annotated labels are expensive and limited.  ...  We generate pseudo-labels according to an efficient and extensible algorithm that is based on a classical result from the geometry of high dimensions, the Cayley-Menger determinant.  ...  In the cases where pseudo-labeling schemes did better than vanilla ImageNet1K, learning rates of 0.015 did best.  ... 
arXiv:2207.03554v1 fatcat:7uhwocfd65c7rijtqdleoi6bmm
Showing results 1 — 15 out of 95,938 results