
Robust Multi-View Boosting with Priors [chapter]

Amir Saffari, Christian Leistner, Martin Godec, Horst Bischof
2010 Lecture Notes in Computer Science  
Since the priors may contain a significant amount of noise, we introduce a new loss function for the unlabeled regularization that is robust to noisy priors. ... We also show that the newly proposed loss function is more robust than other alternatives. ... Semi-Supervised Boosting with Robust Loss Functions: We now focus on developing multi-class semi-supervised boosting algorithms based on the concept of learning from priors [16, 17]. ...
doi:10.1007/978-3-642-15558-1_56 fatcat:e4c425ighvbg7blsey3cgayoae

Semi-Supervised Robust Deep Neural Networks for Multi-Label Classification

Hakan Cevikalp, Burak Benligiray, Ömer Nezih Gerek, Hasan Saribas
2019 Computer Vision and Pattern Recognition  
In this paper, we propose a robust method for semi-supervised training of deep neural networks for multi-label image classification. ... Using a robust loss function becomes crucial here, as the initial label propagations may include many errors, which degrade the performance of non-robust loss functions. ... Acknowledgments: The authors would like to thank NVIDIA for the Tesla K40 GPU donation used in this study. ...
dblp:conf/cvpr/CevikalpBGS19 fatcat:fgavi3hyqvhmjc7bqs6rt7e2ce

Robust Loss Functions under Label Noise for Deep Neural Networks [article]

Aritra Ghosh, Himanshu Kumar, P.S. Sastry
2017 arXiv   pre-print
Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks. ... For binary classification, there exist theoretical results on loss functions that are robust to label noise. ... Conclusion: In this paper, we derived some theoretical results on the robustness of loss functions in multi-class classification (one such noise-tolerant loss is sketched below). ...
arXiv:1712.09482v1 fatcat:sq2kbgg5evb3rjfv3nyg42imbq
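
The excerpt omits the paper's central condition, but the well-known result in this line of work is that a multi-class loss is tolerant to symmetric label noise when it satisfies the symmetry condition sum_{j=1..K} L(f(x), j) = C for some constant C; mean absolute error (MAE) over class probabilities satisfies it, while plain cross entropy does not. A minimal PyTorch sketch (function names are ours):

    import torch
    import torch.nn.functional as F

    def mae_loss(logits, targets):
        """MAE over probabilities: sum_j |p_j - y_j| = 2 - 2 * p_y.
        Satisfies the symmetry condition, hence noise-tolerant."""
        probs = F.softmax(logits, dim=1)
        p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        return (2.0 - 2.0 * p_true).mean()

    def ce_loss(logits, targets):
        """Standard cross entropy; not symmetric, hence not noise-tolerant."""
        return F.cross_entropy(logits, targets)

    logits = torch.randn(8, 5)              # batch of 8 samples, 5 classes
    targets = torch.randint(0, 5, (8,))
    print(mae_loss(logits, targets), ce_loss(logits, targets))
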

Training Multi-Task Adversarial Network for Extracting Noise-Robust Speaker Embedding [article]

Jianfeng Zhou, Tao Jiang, Lin Li, Qingyang Hong, Zhe Wang, Bingyin Xia
2019 arXiv   pre-print
Motivated by the promising performance of multi-task training in a variety of image processing tasks, we explore the potential of multi-task adversarial training for learning a noise-robust speaker embedding ... Besides, we propose a training strategy that uses the training accuracy as an indicator to stabilize the multi-class adversarial optimization process. ... AL-Loss: Inspired by the FL-Loss function, we propose the AL-Loss function, combined with the cross-entropy loss function, for the multi-class adversarial task; it is formulated as l_al = -(1/N) ... (the remainder of the formula is truncated in this excerpt; the focal-style loss it builds on is sketched below). ...
arXiv:1811.09355v2 fatcat:zgguy2as4jbrxlrz7b6kpobhvu
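
The AL-Loss itself is truncated in the excerpt above, so no attempt is made to reconstruct it here. For background only, this is a minimal sketch of the standard focal loss, the family the "FL-Loss" name appears to refer to; the hyperparameter name gamma is ours:

    import torch
    import torch.nn.functional as F

    def focal_loss(logits, targets, gamma=2.0):
        """Focal loss: -(1 - p_y)^gamma * log(p_y), averaged over the batch.
        gamma controls how strongly easy (high p_y) examples are down-weighted."""
        log_probs = F.log_softmax(logits, dim=1)
        log_p_true = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        p_true = log_p_true.exp()
        return (-((1.0 - p_true) ** gamma) * log_p_true).mean()
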

Addressing Imbalance in Weakly Supervised Multi-label Learning

Fang-Fang Luo, Wen-Zhong Guo, Guo-Long Chen
2019 IEEE Access  
INDEX TERMS: Multi-label learning, missing labels, class imbalance, robust loss function. ... Nevertheless, the label set for each training example is assumed to be complete in most current multi-label learning methods. ... However, when the average top-k loss (with k = 5) is selected as the aggregation loss function, the obtained classifier better protects minority-class samples and is more robust to noisy data than the maximum loss (the average top-k aggregation is sketched below). ...
doi:10.1109/access.2019.2906409 fatcat:ea7vyqxnmfbefnnrmavmbmybr4
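
The average top-k aggregate loss mentioned in the excerpt averages only the k largest per-sample losses in a batch, interpolating between the maximum loss (k = 1) and the mean loss (k = batch size). A minimal PyTorch sketch, assuming the per-sample losses are already computed:

    import torch

    def average_top_k_loss(per_sample_losses, k=5):
        """Average of the k largest per-sample losses.
        k=1 recovers the maximum loss; k=len recovers the mean loss."""
        topk = torch.topk(per_sample_losses, k).values
        return topk.mean()

    losses = torch.rand(32)          # e.g. per-sample multi-label losses
    print(average_top_k_loss(losses, k=5))
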

Multi-Class Triplet Loss With Gaussian Noise for Adversarial Robustness

Benjamin Appiah, Edward Y. Baagyere, Kwabena Owusu-Agyemang, Zhiguang Qin, Muhammed Amin Abdullah
2020 IEEE Access  
Our proposed method regularizes the DNN classifier's representation space with a multi-class triplet loss function to learn a feature representation that captures the similarity between adversarial and clean images and ... Figure 1 shows our modified multi-class triplet loss for adversarial training (a baseline triplet loss is sketched below). ...
doi:10.1109/access.2020.3024244 fatcat:fu2d7zmpmng53ojr65ximucqv4
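
The paper's exact variant is not given in the excerpt; as a baseline, here is the standard triplet margin loss on embeddings via PyTorch's built-in. The Gaussian perturbation of the anchor is our guess at a simple stand-in for the "Gaussian noise" ingredient in the title, not the paper's method:

    import torch
    import torch.nn as nn

    triplet = nn.TripletMarginLoss(margin=1.0)

    # Embeddings for anchor, positive (same class), negative (other class).
    anchor   = torch.randn(16, 128)
    positive = torch.randn(16, 128)
    negative = torch.randn(16, 128)

    # Assumed ingredient: perturb the anchor embedding during training
    # to encourage a representation that is robust to small perturbations.
    sigma = 0.1
    noisy_anchor = anchor + sigma * torch.randn_like(anchor)

    loss = triplet(noisy_anchor, positive, negative)
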

Gradient Adversarial Training of Neural Networks [article]

Ayan Sinha, Zhao Chen, Vijay Badrinarayanan, Andrew Rabinovich
2018 arXiv   pre-print
tensor; and (3) for multi-task learning, we classify the gradient tensors derived from different task loss functions and tune them to be statistically indistinguishable. ... soft targets, and boosts multi-task learning by aligning the gradient tensors derived from the task-specific loss functions (a rough sketch of this step follows below). ... [Algorithm excerpt: θ(j) → θ(j + 1), update weights in the task classifier network.] Figure 3: GREAT for multi-task learning. ...
arXiv:1806.08028v1 fatcat:pjqvv7ylevbcfb6y3c7zf7gqda
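
A rough sketch of one way to implement step (3) as described: gradients of each task loss with respect to the shared features are passed, through a gradient-reversal layer, to an auxiliary classifier that tries to identify which task they came from. Module names and wiring here are our guesses, not the paper's:

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negated gradient in the backward pass."""
        @staticmethod
        def forward(ctx, x):
            return x
        @staticmethod
        def backward(ctx, grad_output):
            return -grad_output

    shared = nn.Linear(32, 16)                    # toy shared trunk
    heads = [nn.Linear(16, 1) for _ in range(2)]  # two task heads
    grad_clf = nn.Linear(16, 2)                   # classifies a gradient's source task

    x = torch.randn(4, 32)
    feats = shared(x)

    adv_loss = 0.0
    for task_id, head in enumerate(heads):
        task_loss = head(feats).pow(2).mean()     # stand-in task losses
        # Gradient of the task loss w.r.t. the shared features:
        g = torch.autograd.grad(task_loss, feats, create_graph=True)[0]
        logits = grad_clf(GradReverse.apply(g))
        target = torch.full((x.size(0),), task_id, dtype=torch.long)
        # The reversal pushes the trunk to make the task gradients
        # statistically indistinguishable to the classifier.
        adv_loss = adv_loss + nn.functional.cross_entropy(logits, target)
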

Learning from Positive and Unlabeled Data with Augmented Classes [article]

Zhongnian Li, Liutao Yang, Zhongchen Ma, Tongfeng Sun, Xinzheng Xu, Daoqiang Zhang
2022 arXiv   pre-print
Positive Unlabeled (PU) learning aims to learn a binary classifier from only positive and unlabeled data, and is used in many real-world scenarios. ... In this paper, we propose an unbiased risk estimator for PU learning with Augmented Classes (PUAC) by utilizing unlabeled data from the augmented-classes distribution, which can be easily collected in ... Many margin loss functions satisfy consistency properties for the multi-class problem, such as the square loss φ(z) = (1 − z)^2 (a classic unbiased PU risk estimator is sketched below). ...
arXiv:2207.13274v1 fatcat:vvbguwycubdopeul2pu3rn22k4
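
The PUAC estimator itself is elided in the excerpt. For context, a sketch of the classic unbiased PU risk estimator (du Plessis et al.), which rewrites the negative-class risk using only positive and unlabeled samples; the class prior pi_p is assumed known:

    import torch

    def unbiased_pu_risk(g_pos, g_unl, pi_p, loss):
        """Classic unbiased PU risk:
        R(g) = pi_p * E_p[loss(g, +1)] + E_u[loss(g, -1)] - pi_p * E_p[loss(g, -1)].
        g_pos / g_unl: classifier outputs on positive / unlabeled samples."""
        r_p_pos = loss(g_pos, +1).mean()
        r_p_neg = loss(g_pos, -1).mean()
        r_u_neg = loss(g_unl, -1).mean()
        return pi_p * r_p_pos + r_u_neg - pi_p * r_p_neg

    # The square-type margin loss from the excerpt: phi(z) = (1 - z)^2, z = y * g.
    square_loss = lambda g, y: (1.0 - y * g) ** 2

    g_pos, g_unl = torch.randn(100), torch.randn(400)
    print(unbiased_pu_risk(g_pos, g_unl, pi_p=0.3, loss=square_loss))
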

Towards Robust Pattern Recognition: A Review [article]

Xu-Yao Zhang, Cheng-Lin Liu, Ching Y. Suen
2020 arXiv   pre-print
Actually, the human brain is robust at learning concepts continually and incrementally, in complex, open, and changing environments, with different contexts, modalities, and tasks, given only a few examples ... robust pattern recognition. ... Since each task produces a task-specific loss function, generally, as soon as you find yourself optimizing more than one loss function, you are effectively doing multi-task learning [236]. ...
arXiv:2006.06976v1 fatcat:mn35i7bmhngl5hxr3vukdcmmde

TransientBoost: On-line boosting with transient data

Sabine Sternig, Martin Godec, Peter M. Roth, Horst Bischof
2010 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops  
This is realized by using an internal multi-class representation and modeling reliable and unreliable data in separate classes. ... For on-line learning algorithms, which are applied in many vision tasks such as detection or tracking, robust integration of unlabeled samples is a crucial point. ... Moreover, since the loss functions of both approaches can be changed on the fly, an exponential and a logit loss function are applied to increase the robustness of the negative and positive updates (both losses are sketched below). ...
doi:10.1109/cvprw.2010.5543880 dblp:conf/cvpr/SternigGRB10 fatcat:w5mh3ynhozbhtoetns3mqg5ahu
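
For reference, the two margin losses named in the excerpt, written for a label y in {−1, +1} and classifier score f(x); the exponential loss penalizes misclassified samples exponentially, while the logit loss grows only linearly and is therefore more forgiving of noisy updates:

    import torch
    import torch.nn.functional as F

    def exponential_loss(margin):
        """exp(-y * f(x)): aggressive, sensitive to outliers."""
        return torch.exp(-margin)

    def logit_loss(margin):
        """log(1 + exp(-y * f(x))): grows linearly, hence more noise-robust."""
        return F.softplus(-margin)

    margin = torch.tensor([2.0, 0.0, -2.0])   # y * f(x) for three samples
    print(exponential_loss(margin), logit_loss(margin))
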

Robust Feature Learning on Long-Duration Sounds for Acoustic Scene Classification [article]

Yuzhong Wu, Tan Lee
2021 arXiv   pre-print
The auxiliary classifier is trained with an auxiliary loss function that assigns less learning weight to poorly classified examples than the standard cross-entropy loss (one way to realize this is sketched below). ... For a more robust ASC system, we propose a robust feature learning (RFL) framework to train the CNN. The RFL framework down-weights CNN learning specifically on long-duration sounds. ... Cross-Entropy Loss for CNN Training: The CE loss for the ground-truth class is L_CE(p) = −log(p), where p is the output probability for the ground-truth class. ...
arXiv:2108.05008v1 fatcat:a6gadbjltrbm7lqan27ngnzpz4
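
One simple way to realize such down-weighting (our illustration, not necessarily the paper's exact form) is to scale the CE term by p_y^gamma, so that low-probability, poorly classified examples contribute less than under plain CE. Note this is the opposite direction of the focal loss, which down-weights easy examples:

    import torch
    import torch.nn.functional as F

    def down_weighted_ce(logits, targets, gamma=1.0):
        """Cross entropy scaled by p_y^gamma: poorly classified examples
        (small p_y) receive *less* weight than under plain CE.
        Illustrative form only; not the paper's exact auxiliary loss."""
        log_probs = F.log_softmax(logits, dim=1)
        log_p_true = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        p_true = log_p_true.exp()
        return (-(p_true ** gamma) * log_p_true).mean()
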

Learning to Learn and Predict: A Meta-Learning Approach for Multi-Label Classification

Jiawei Wu, Wenhan Xiong, William Yang Wang
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
The training policies are then used to train the classifier with the cross-entropy loss function, and the prediction policies are further used for prediction. ... However, most existing models are trained with the standard cross-entropy loss function and use a fixed prediction policy (e.g., a threshold of 0.5; contrasted with per-label thresholds below) for all the labels, which completely ignores ... Acknowledgments: The authors would like to thank the anonymous reviewers for their thoughtful comments. ...
doi:10.18653/v1/d19-1444 dblp:conf/emnlp/WuXW19 fatcat:umfquqk6kjca3l667sn2nghc2m
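
The fixed policy the excerpt criticizes is easy to state: every label is predicted positive when its probability crosses 0.5. A minimal contrast with per-label thresholds (which a learned prediction policy would tune; the threshold values below are placeholders):

    import torch

    probs = torch.tensor([[0.7, 0.4, 0.6],
                          [0.2, 0.55, 0.9]])    # per-label probabilities

    fixed = probs > 0.5                          # the fixed 0.5 policy
    per_label_t = torch.tensor([0.6, 0.5, 0.8])  # e.g. tuned per label
    adaptive = probs > per_label_t               # broadcast per-label thresholds
    print(fixed, adaptive, sep="\n")
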

Improved cross entropy loss for noisy labels in vision leaf disease classification

Yipeng Chen, Ke Xu, Peng Zhou, Xiaojuan Ban, Di He
2022 IET Image Processing  
To cope with the combination of the two noise types, label smoothing, from the standpoint of increasing entropy for uncertain labels, is blended with the Taylor cross-entropy loss, which is proven efficient for solving ... The predictive performance of supervised learning algorithms depends on the quality of labels. ... Given the label over K classes associated with x_i, a supervised learning classifier seeks to learn a function f : X → Y that maps the input space to the label space (both the Taylor cross entropy and label smoothing are sketched below). ...
doi:10.1049/ipr2.12402 fatcat:mvlkih5xqzdfxbuqkfye623hmy
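
The Taylor cross-entropy loss truncates the series −log(p) = sum_{t≥1} (1 − p)^t / t at a finite order, which bounds the loss and tames the effect of noisy labels; label smoothing spreads a small mass eps over all K classes. A sketch of both ingredients (how the paper weights their blend is not given in the excerpt):

    import torch
    import torch.nn.functional as F

    def taylor_ce(logits, targets, order=2):
        """Truncated Taylor series of -log(p_y): sum_{t=1}^{order} (1-p_y)^t / t.
        Bounded for finite order, hence more tolerant to label noise than CE."""
        probs = F.softmax(logits, dim=1)
        p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        loss = sum((1.0 - p_true) ** t / t for t in range(1, order + 1))
        return loss.mean()

    def smoothed_targets(targets, num_classes, eps=0.1):
        """Label smoothing: raise the entropy of uncertain one-hot labels."""
        one_hot = F.one_hot(targets, num_classes).float()
        return one_hot * (1.0 - eps) + eps / num_classes
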

Learning by Minimizing the Sum of Ranked Range [article]

Shu Hu, Yiming Ying, Xin Wang, Siwei Lyu
2020 arXiv   pre-print
We explore two applications of minimizing the SoRR framework in machine learning, namely the AoRR aggregate loss for binary classification (sketched below) and the TKML individual loss for multi-label/multi-class ... Such cases occur in the aggregate loss, which combines individual losses of a learning model over each training sample, and in the individual loss for multi-label learning, which combines prediction scores ... We are grateful to all anonymous reviewers for their constructive comments. ...
arXiv:2010.01741v1 fatcat:4u6rnhsnzrg5vgozimnvg6o5je
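
The AoRR (Average of Ranked Range) aggregate loss averages the (k+1)-th through m-th largest per-sample losses: dropping the top k losses discards likely outliers, while still focusing on hard examples. A minimal sketch, assuming the per-sample losses are already computed:

    import torch

    def aorr_loss(per_sample_losses, k, m):
        """Mean of the (k+1)-th through m-th largest per-sample losses.
        k = 0, m = batch size recovers the plain average loss."""
        sorted_losses, _ = torch.sort(per_sample_losses, descending=True)
        return sorted_losses[k:m].mean()

    losses = torch.rand(32)
    print(aorr_loss(losses, k=2, m=16))
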

Rademacher Complexity for Adversarially Robust Generalization [article]

Dong Yin and Kannan Ramchandran and Peter Bartlett
2020 arXiv   pre-print
The results also extend to multi-class linear classifiers. For (nonlinear) neural networks, we show that the dimension dependence in the adversarial Rademacher complexity also exists. ... We further consider a surrogate adversarial loss for one-hidden-layer ReLU networks and prove margin bounds for this setting. ... The authors would like to thank Justin Gilmer for helpful discussion. ...
arXiv:1810.11914v4 fatcat:j64ipvkbsff2jpqfs7b5b7fetq
Showing results 1–15 of 107,031.