Learning from Pixel-Level Noisy Label: A New Perspective for Light Field Saliency Detection
[article]
2022
arXiv
pre-print
In this paper, we propose to learn light field saliency from pixel-level noisy labels obtained from unsupervised hand-crafted feature-based saliency methods. ...
Given this goal, a natural question is: can we efficiently incorporate the relationships among light field cues while identifying clean labels in a unified framework? ...
Unlike [28, 48, 55], [49, 54] deal with learning from a single noisy labeling in a much more efficient way. [49] learns saliency prediction and robust fitting models to identify inliers. [54] proposes ...
arXiv:2204.13456v1
fatcat:np3aj7mzingu3gt46zwym5emha
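The approach excerpted above supervises a saliency network directly with pixel-level pseudo labels produced by unsupervised hand-crafted methods. A minimal illustration of that kind of supervision signal, assuming a generic PyTorch setup (the function name and the ambiguity-based weighting are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def noisy_pseudo_label_loss(logits, pseudo_map, min_weight=0.1):
    """Pixel-wise BCE against a noisy pseudo saliency map.

    Pixels whose pseudo label sits near 0.5 are down-weighted, a common
    heuristic when labels come from hand-crafted saliency methods.
    """
    weight = (pseudo_map - 0.5).abs() * 2.0           # confidence in [0, 1]
    weight = torch.clamp(weight, min=min_weight)
    return F.binary_cross_entropy_with_logits(logits, pseudo_map, weight=weight)

# Random tensors stand in for a batch of predictions and pseudo labels.
logits = torch.randn(2, 1, 64, 64)
pseudo = torch.rand(2, 1, 64, 64)
print(noisy_pseudo_label_loss(logits, pseudo).item())
```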
Activation to Saliency: Forming High-Quality Labels for Completely Unsupervised Salient Object Detection
[article]
2021
arXiv
pre-print
In the second stage, a self-rectification learning paradigm is developed to train a saliency detector and refine the pseudo labels online. ...
In order to overcome these shortcomings, we propose a new two-stage Activation-to-Saliency (A2S) framework that effectively excavates high-quality saliency cues to train a robust saliency detector. ...
Moreover, instead of extracting noisy saliency cues using traditional SOD methods, we present a novel perspective to excavate high-quality saliency cues based on the learned features of a pre-trained network ...
arXiv:2112.03650v3
fatcat:u2lquovonvhhbn2o3lhmqsf424
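The self-rectification stage described above trains a detector while refreshing its own pseudo labels online. The loop below is only an assumed, generic illustration of online pseudo-label refinement (EMA-style blending); it is not the A2S implementation:

```python
import torch
import torch.nn as nn

def refine_pseudo_labels(detector, images, pseudo_labels, momentum=0.7):
    """Blend the detector's current predictions back into the stored pseudo labels."""
    with torch.no_grad():
        preds = torch.sigmoid(detector(images))
    return momentum * pseudo_labels + (1.0 - momentum) * preds

# Toy stand-ins: a one-layer "detector" and random initial pseudo labels.
detector = nn.Conv2d(3, 1, kernel_size=3, padding=1)
images = torch.randn(2, 3, 64, 64)
pseudo = torch.rand(2, 1, 64, 64)
pseudo = refine_pseudo_labels(detector, images, pseudo)   # one refinement step
print(pseudo.shape)
```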
Model-agnostic Approaches to Handling Noisy Labels When Training Sound Event Classifiers
[article]
2019
arXiv
pre-print
In this work, we evaluate simple and efficient model-agnostic approaches to handling noisy labels when training sound event classifiers, namely label smoothing regularization, mixup and noise-robust loss ...
While learning from noisy labels has been an active area of research in computer vision, it has received little attention in sound event classification. ...
In [16], two networks operating on different views of the data co-teach each other to learn from noisy labels. ...
arXiv:1910.12004v1
fatcat:rpu72uaxmzgn3m24taonrlx6py
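Two of the model-agnostic techniques named above, label smoothing regularization and mixup, are compact enough to show directly; the snippet is a generic sketch with illustrative parameter values, not the paper's exact configuration:

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: move eps of the probability mass to the other classes."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - eps) + eps / n_classes

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
    """Mixup: train on convex combinations of example pairs and their labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2

y = np.array([0.0, 0.0, 1.0, 0.0])
print(smooth_labels(y))            # [0.025 0.025 0.925 0.025]
```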
Multi-Label Learning from Single Positive Labels
2021
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We explore this special case of learning from missing labels across four different multi-label image classification datasets for both linear classifiers and end-to-end finetuned deep networks. ...
Predicting all applicable labels for a given image is known as multi-label classification. ...
Since it is easier to train image classifiers on informative labels than uninformative ones, we hypothesize that correct labels are a "good choice" from the algorithm's perspective. ...
doi:10.1109/cvpr46437.2021.00099
fatcat:yupyc7xi5rao3ac5qiixmlr2oq
Multi-Label Learning from Single Positive Labels
[article]
2021
arXiv
pre-print
We explore this special case of learning from missing labels across four different multi-label image classification datasets for both linear classifiers and end-to-end fine-tuned deep networks. ...
Predicting all applicable labels for a given image is known as multi-label classification. ...
In the multi-class setting, [26] proposes to learn from complementary labels, i.e., they assume access to a single negative label per item that specifies that the item does not belong to a given class. ...
arXiv:2106.09708v2
fatcat:xao37rffsbcepdrvtsh4yiidzm
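A standard baseline in this single-positive setting is to "assume negative" for every unobserved label and train with binary cross-entropy; the sketch below shows that baseline in generic PyTorch form and should not be read as the losses proposed in the paper:

```python
import torch
import torch.nn.functional as F

def assume_negative_bce(logits, positive_idx):
    """Treat the single observed positive as 1 and every unobserved label as 0."""
    targets = torch.zeros_like(logits)
    targets[torch.arange(logits.shape[0]), positive_idx] = 1.0
    return F.binary_cross_entropy_with_logits(logits, targets)

logits = torch.randn(4, 10)                  # 4 images, 10 candidate labels
positives = torch.tensor([3, 1, 7, 0])       # the one observed positive per image
print(assume_negative_bce(logits, positives).item())
```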
Interactive Label Cleaning with Example-based Explanations
[article]
2021
arXiv
pre-print
We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. ...
Whenever it detects a suspicious example, Cincer identifies a counter-example in the training set that -- according to the model -- is maximally incompatible with the suspicious example, and asks the annotator ...
Typical strategies for learning from noisy labels include discarding or downweighting suspicious examples and employing models robust to noise [33, 1, 34, 2], often requiring a non-trivial noise ratio ...
arXiv:2106.03922v3
fatcat:jdi7yfmyifcdpglgdb4v375vmu
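The relabeling workflow above hinges on flagging suspicious examples for the human supervisor. A generic confidence-based filter (an assumption here; Cincer's incompatibility measure and counter-example selection are not reproduced) could look like this:

```python
import numpy as np

def flag_suspicious(probs, labels, threshold=0.2):
    """Return indices of examples whose assigned label has low predicted probability."""
    confidence = probs[np.arange(len(labels)), labels]
    return np.where(confidence < threshold)[0]

probs = np.array([[0.90, 0.10],
                  [0.05, 0.95],
                  [0.55, 0.45]])
labels = np.array([0, 0, 1])                 # the second example looks mislabeled
print(flag_suspicious(probs, labels))        # -> [1]
```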
Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images
[article]
2021
arXiv
pre-print
the state-of-the-art alternatives, even while learning from a single slide. ...
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), to refine coarse annotations on a single WSI without the need of external training data. ...
Adam Charles, Haoyang Mi and Jacopo Teneggi from the Department of Biomedical Engineering, Johns Hopkins University, for their useful advice and discussions. ...
arXiv:2109.10778v1
fatcat:dzf2maywbza63hel7v2okd57wi
Debugging Frame Semantic Role Labeling
[article]
2019
arXiv
pre-print
We propose a quantitative and qualitative analysis of the performances of statistical models for frame semantic structure extraction. ...
We report on the robustness of a recent statistical classifier for frame semantic parsing to lexical configurations of predicate-argument structures, relying on an artificially augmented dataset generated ...
They then used a conditional log-linear model over spans for each role of each evoked frame. ... where the kappa statistic is used, FrameNet annotators do not have to choose among a fixed pool of labels for each annotated ...
arXiv:1901.07475v1
fatcat:7sktiu6yjnawnomsqvh5sgde7u
Hybrid Variability Aware Network (HVANet): A Self-Supervised Deep Framework for Label-Free SAR Image Change Detection
2022
Remote Sensing
unsupervised label-free SAR image CD by taking inspiration from recent developments in deep self-supervised learning. ...
In this paper, we argue that these internal hybrid variabilities can also be used for learning stronger feature representation, and we propose a hybrid variability aware network (HVANet) for completely ...
samples is overly simple, and ii) noisy labels are inevitable. ...
doi:10.3390/rs14030734
fatcat:z6gojgx3lnbire6rqcnd32oh44
DeepUSPS: Deep Robust Unsupervised Saliency Prediction With Self-Supervision
[article]
2021
arXiv
pre-print
In this work, we propose a two-stage mechanism for robust unsupervised object saliency prediction, where the first stage involves refinement of the noisy pseudo labels generated from different handcrafted ...
Each handcrafted method is substituted by a deep network that learns to generate the pseudo labels. ...
From the robust learning perspective, ? proposes a robust way to learn from wrongly annotated datasets for classification tasks. ...
arXiv:1909.13055v4
fatcat:wdx3l53gczcsnbdk4t6ed366ae
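DeepUSPS, as excerpted above, refines the pseudo labels of several hand-crafted methods before using them as training targets. One plausible (assumed, simplified) fusion rule is pixel-wise majority voting over the refined maps; the paper's actual inter-method self-supervision is more involved:

```python
import numpy as np

def fuse_pseudo_maps(maps, agreement=0.5):
    """Fuse refined pseudo saliency maps by keeping pixels most methods agree on."""
    stacked = np.stack(maps, axis=0)          # (n_methods, H, W), values in [0, 1]
    vote = (stacked > 0.5).mean(axis=0)       # fraction of methods voting foreground
    return (vote >= agreement).astype(np.float32)

maps = [np.random.rand(32, 32) for _ in range(4)]
print(fuse_pseudo_maps(maps).mean())          # fraction of fused foreground pixels
```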
Cross-Modal Ranking with Soft Consistency and Noisy Labels for Robust RGB-T Tracking
[chapter]
2018
Lecture Notes in Computer Science
Moreover, we propose a single unified optimization algorithm to solve the proposed model with stable and efficient convergence behavior. ...
Second, we propose an optimal query learning method to handle label noise in queries. ...
Different from these works, we propose a novel cross-modal ranking algorithm for RGB-T tracking from a new perspective. In particular, our approach has the following advantages. i) Generality. ...
doi:10.1007/978-3-030-01261-8_49
fatcat:fccrpevqbzcc3di2btafnvxyxy
Dense Semantic Labeling of Subdecimeter Resolution Images With Convolutional Neural Networks
2017
IEEE Transactions on Geoscience and Remote Sensing
Semantic labeling (or pixel-level land-cover classification) in ultra-high resolution imagery (<10 cm) requires statistical models able to learn high-level concepts from spatial data, with large appearance ...
The proposed full patch labeling CNN outperforms these models by a large margin, also showing a very appealing inference time. ...
ACKNOWLEDGMENTS This work was supported in part by the Swiss National Science Foundation, via the grant 150593 "Multimodal machine learning for remote sensing information fusion" (http://p3.snf.ch/project ...
doi:10.1109/tgrs.2016.2616585
fatcat:vkoklqtwjvcqdl7sbfnunaadpm
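The full-patch labeling idea referenced above assigns a class score to every pixel of an input patch in one forward pass. A toy fully convolutional stack (far smaller than the network evaluated in the paper, and purely illustrative) makes the shape logic concrete:

```python
import torch
import torch.nn as nn

# Toy fully convolutional labeler: one class score per pixel of the input patch.
dense_labeler = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 6, kernel_size=1),          # e.g. 6 land-cover classes
)
patch = torch.randn(1, 3, 128, 128)
print(dense_labeler(patch).shape)             # torch.Size([1, 6, 128, 128])
```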
Predicting Emotion Labels for Chinese Microblog Texts
[chapter]
2015
Studies in Computational Intelligence
The work described in this paper has been supported by the NSFC Overseas, Hong Kong & Macao Scholars Collaborated Researching Fund (61028003). ...
Acknowledgements: The authors would like to thank Ivano Azzini, from ...
use machine learning techniques to establish a model from a large corpus of reviews. ...
Machine learning via supervised classification, on the other hand, is robust to such variety but usually requires hand-labeled training data. ...
doi:10.1007/978-3-319-18458-6_7
fatcat:orzv7zzxhnewrgrilynvmj27wm
Unsupervised Cell Segmentation and Labelling in Neural Tissue Images
2021
Applied Sciences
Although various methods have been developed to automate this task, they tend to make use of single-purpose machine learning models that require extensive training, imposing a significant workload on the ...
A very prominent and useful technique adopted across many different fields is imaging and the analysis of histopathological and fluorescent label tissue samples. ...
Transfer learning allows storing some learned knowledge from a specific task or data and then applying it to new scenarios [21]. ...
doi:10.3390/app11093733
fatcat:ct5d3wtisveytpdvzdgkpo75ne
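The transfer-learning remark above (reusing knowledge learned on one task in a new scenario) corresponds to the standard recipe of taking a pretrained backbone, freezing it, and retraining only a new head; the snippet is that generic recipe, not the cell-segmentation pipeline from the paper:

```python
import torch.nn as nn
import torchvision

# Reuse an ImageNet-pretrained backbone and retrain only the classification head.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for param in model.parameters():
    param.requires_grad = False                 # freeze the transferred features
model.fc = nn.Linear(model.fc.in_features, 2)  # new head for the target task
```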
Detection and Localization of Anomalous Motion in Video Sequences from Local Histograms of Labeled Affine Flows
2017
Frontiers in ICT
This work was partially supported by Région Bretagne (Brittany Council) through a contribution to AB's PhD student grant. ...
The presence of anomalous motion can be detected by deciding that the given motion cannot be fitted by a model learned from a set of training data of normal behaviors for a given scenario, computed ...
Let us also stress that from the perspective of the camera, the cyclist looks not that different from a normal pedestrian. ...
doi:10.3389/fict.2017.00010
fatcat:l3g2dllbybhfpia7n7emhsauvi
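Deciding that an observed motion cannot be fitted by a model learned from normal behavior can be illustrated with the simplest possible density model; the Gaussian/Mahalanobis sketch below is an assumption for illustration, not the local-histogram method of the paper:

```python
import numpy as np

def fit_normal_model(features):
    """Fit a Gaussian to motion features collected under normal behavior."""
    mean = features.mean(axis=0)
    cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
    return mean, cov

def anomaly_score(x, mean, cov):
    """Squared Mahalanobis distance: large values indicate anomalous motion."""
    diff = x - mean
    return float(diff @ np.linalg.inv(cov) @ diff)

normal = np.random.randn(500, 4)              # stand-in for normal-motion features
mean, cov = fit_normal_model(normal)
print(anomaly_score(np.array([6.0, 6.0, 6.0, 6.0]), mean, cov))
```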
Showing results 1 — 15 out of 2,152 results