54,719 Hits in 5.0 sec

A flexible class of dependence-aware multi-label loss functions

Eyke Hüllermeier, Marcel Wever, Eneldo Loza Mencia, Johannes Fürnkranz, Michael Rapp
2022 Machine Learning  
With that goal in mind, we introduce a class of loss functions that are able to capture the important aspect of label dependence.  ...  The assessment of multi-label classifiers in terms of these losses is illustrated in an empirical study, clearly showing their aptness at capturing label dependencies.  ...  In this paper, we introduced a flexible class of loss functions that allows for modeling dependence-awareness by means of non-additive measures.  ... 
doi:10.1007/s10994-021-06107-2 fatcat:rdt7rez2bvhbxlkzrxrwvdxpqm

Large-Scale Location-Aware Services in Access: Hierarchical Building/Floor Classification and Location Estimation Using Wi-Fi Fingerprinting Based on Deep Neural Networks

Kyeong Soo Kim, Ruihao Wang, Zhenghang Zhong, Zikun Tan, Haowei Song, Jaehoon Cha, Sanghyuk Lee
2018 Fiber and Integrated Optics (Print)  
and a feed-forward classifier for multi-label classification with argmax functions to convert multi-label classification results into multi-class classification ones.  ...  One of the key technologies for future large-scale location-aware services in access is a scalable indoor localization technique.  ...  of multi-label classification into those of multi-class classification using argmax functions.  ... 
doi:10.1080/01468030.2018.1467515 fatcat:zi32mfmvfrb6bhzr4vkn5rx2ui
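The argmax conversion mentioned in this snippet, collapsing groups of mutually exclusive labels (e.g. building labels and floor labels) into per-group class decisions, can be sketched as follows; the group layout and score values are hypothetical, not taken from the paper:

```python
def multilabel_to_multiclass(scores, groups):
    """For each group of mutually exclusive label indices, pick the
    index with the highest multi-label score (the argmax step)."""
    return [max(g, key=lambda i: scores[i]) for g in groups]

# hypothetical output scores: 3 building labels followed by 2 floor labels
scores = [0.2, 0.9, 0.1, 0.3, 0.7]
groups = [[0, 1, 2], [3, 4]]
print(multilabel_to_multiclass(scores, groups))  # -> [1, 4]
```

Each group yields exactly one class, turning the multi-label output into a hierarchical multi-class decision.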

Multi-Label Zero-Shot Learning with Transfer-Aware Label Embedding Projection [article]

Meng Ye, Yuhong Guo
2018 arXiv   pre-print
In this paper we propose a transfer-aware embedding projection approach to tackle multi-label zero-shot learning.  ...  Zero-shot learning transfers knowledge from seen classes to novel unseen classes to reduce human labor of labelling data for building new classifiers.  ...  Transfer-Aware Label Embedding Projection Employing the ranking loss to minimize classification error on seen classes can ensure the predictability of the projected label embedding.  ... 
arXiv:1808.02474v1 fatcat:dov2w7ofbvdg3kdfiprkb5sm3i
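The ranking loss referred to in this snippet is typically a pairwise hinge penalty that pushes scores of relevant labels above irrelevant ones by a margin. A minimal sketch; the margin and score values are illustrative assumptions, not the paper's exact formulation:

```python
def pairwise_ranking_loss(pos_scores, neg_scores, margin=1.0):
    """Sum of hinge penalties incurred whenever a relevant label is not
    scored at least `margin` above an irrelevant label."""
    return sum(max(0.0, margin - (p - n))
               for p in pos_scores for n in neg_scores)

# first pair is separated by more than the margin (no penalty),
# second pair violates the margin and contributes ~0.8
loss = pairwise_ranking_loss([2.0], [0.5, 1.8])
```

Minimizing this encourages the projected label embeddings of seen classes to rank correctly, which is what "predictability" means in the snippet.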

MlTr: Multi-label Classification with Transformer [article]

Xing Cheng, Hezheng Lin, Xiangyu Wu, Fan Yang, Dong Shen, Zhongyuan Wang, Nian Shi, Honglin Liu
2021 arXiv   pre-print
We put forward a Multi-label Transformer architecture (MlTr) constructed with windows partitioning, in-window pixel attention, cross-window attention, particularly improving the performance of multi-label  ...  The task of multi-label image classification is to recognize all the object labels presented in an image.  ...  Customized Loss Function: In multi-label classification, the commonly used loss functions are BCE loss and multi-label soft margin loss [27] in the early stage, considering the problem of positive-negative  ... 
arXiv:2106.06195v1 fatcat:2khnsqllsjek7e6f47ooeowamy
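The BCE loss named in the snippet treats each label as an independent binary problem. A minimal sketch of multi-label binary cross-entropy over predicted probabilities; the epsilon clamp is an assumption added for numerical safety:

```python
import math

def multilabel_bce(probs, targets):
    """Mean binary cross-entropy over labels: each label contributes
    -[t*log(p) + (1-t)*log(1-p)], averaged across all labels."""
    eps = 1e-12  # assumed clamp to avoid log(0)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(probs, targets)) / len(probs)

# two labels, both predicted correctly with probability 0.9
loss = multilabel_bce([0.9, 0.1], [1, 0])
```

Because the terms are summed independently, plain BCE ignores label dependence, which is exactly the weakness the positive-negative imbalance discussion in such papers addresses.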

Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables [article]

Liming Zhang
2020 arXiv   pre-print
In this paper, a novel condition-aware deep architecture "Conditional-UNet" is developed to allow dense labeling for the Co-HAR problem.  ...  A new problem, so-called "Coherent Human Activity Recognition (Co-HAR)", is more complicated than normal multi-class classification tasks since signals of different movements are mixed and interfered with  ...  Condition-aware multi-label dense classification loss: following the common approach of Maximum Likelihood Estimation (MLE) [21] and similar to dense labeling [11], we obtain the loss function by factorizing  ... 
arXiv:2004.09376v1 fatcat:ghxb2cmxyrggzaefj77uv625fq

Conditional-UNet: A Condition-aware Deep Model for Coherent Human Activity Recognition From Wearables

Liming Zhang, Wenbin Zhang, Nathalie Japkowicz
2021 2020 25th International Conference on Pattern Recognition (ICPR)  
To explicitly model conditional dependency, a novel condition-aware deep architecture "Conditional-UNet" is developed to allow for multiple dense labeling for Co-HAR.  ...  Basic multi-label classification typically assumes independence within the multiple activities.  ...  Condition-aware multi-label dense classification loss: following the common approach of Maximum Likelihood Estimation (MLE) [21] and similar to dense labeling [12], we obtain the loss function by factorizing  ... 
doi:10.1109/icpr48806.2021.9412851 fatcat:m6l2toxifbhaldginlx2e74lfe
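The factorized MLE loss described in both Conditional-UNet snippets amounts to summing negative log-likelihoods of each label conditioned on the previous ones, via the chain rule. A toy sketch with the conditional probabilities supplied directly; the probability values are illustrative:

```python
import math

def factorized_nll(cond_probs):
    """Negative log-likelihood of a label sequence under the chain rule:
    -log P(y1) - log P(y2 | y1) - ...; each entry of cond_probs is the
    conditional probability the model assigns to the true label."""
    return -sum(math.log(p) for p in cond_probs)

# P(y1) = 0.8 and P(y2 | y1) = 0.5 for some hypothetical two-label instance
loss = factorized_nll([0.8, 0.5])
```

Conditioning each activity label on the previous one is what distinguishes this from basic multi-label classification, which assumes the labels are independent.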

LiBRe: Label-Wise Selection of Base Learners in Binary Relevance for Multi-label Classification [chapter]

Marcel Wever, Alexander Tornede, Felix Mohr, Eyke Hüllermeier
2020 Lecture Notes in Computer Science  
In multi-label classification (MLC), each instance is associated with a set of class labels, in contrast to standard classification, where an instance is assigned a single label.  ...  In spite of its simplicity, BR proved to be competitive to more sophisticated MLC methods, and still achieves state-of-the-art performance for many loss functions.  ...  The authors also gratefully acknowledge support of this project through computing time provided by the Paderborn Center for Parallel Computing (PC²).  ... 
doi:10.1007/978-3-030-44584-3_44 fatcat:tm3rpqm7sfh4hakagqe6js5upy
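Binary relevance (BR), the decomposition this entry builds on, trains one independent binary classifier per label and predicts the set of labels whose classifier fires. A toy sketch with hypothetical threshold learners standing in for trained base learners:

```python
def binary_relevance_predict(x, base_learners):
    """Binary relevance: query one independent binary classifier per
    label; the predicted label set is every label whose classifier
    returns True for the instance."""
    return [lbl for lbl, clf in base_learners.items() if clf(x)]

# hypothetical threshold "learners" over a 2-feature instance
learners = {
    "rain":  lambda x: x[0] > 0.5,  # e.g. humidity high
    "windy": lambda x: x[1] > 0.7,  # e.g. wind speed high
}
print(binary_relevance_predict((0.9, 0.3), learners))  # -> ['rain']
```

LiBRe's contribution, per the title, is selecting the base learner per label rather than using one learner type for all labels; that selection step is not sketched here.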

Rescuing Deep Hashing from Dead Bits Problem [article]

Shu Zhao, Dayan Wu, Yucan Zhou, Bo Li, Weiping Wang
2021 arXiv   pre-print
The proposed gradient amplifier and error-aware quantization loss are compatible with a variety of deep hashing methods.  ...  A common strategy in these methods is to adopt an activation function, e.g. sigmoid(·) or tanh(·), and minimize a quantization loss to approximate discrete values.  ...  To address this issue, we have proposed a gradient amplifier to detect and rescue the dead bits. Furthermore, an error-aware quantization loss is proposed to alleviate DBP.  ... 
arXiv:2102.00648v1 fatcat:q4phleftzbdaxnynk5qo2m6mty
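The quantization loss mentioned in this snippet penalizes the distance of activation-bounded code bits from the nearest binary value. A plain sketch of the common tanh-based formulation; the paper's error-aware variant and gradient amplifier are not reproduced here:

```python
import math

def quantization_loss(codes):
    """Common quantization penalty for deep hashing: mean squared
    distance of each (tanh-activated) code bit from its nearest
    binary value, +1 or -1."""
    return sum((abs(c) - 1.0) ** 2 for c in codes) / len(codes)

# one saturated bit and one "weak" bit near zero (a candidate dead bit)
codes = [math.tanh(3.0), math.tanh(0.1)]
loss = quantization_loss(codes)
```

Bits stuck near the saturated ends of tanh receive vanishing gradients, which is the dead-bits problem the entry's gradient amplifier is designed to rescue.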

Cross-dataset Training for Class Increasing Object Detection [article]

Yongqiang Yao, Yan Wang, Yu Guo, Jiaojiao Lin, Hongwei Qin, Junjie Yan
2020 arXiv   pre-print
Given two or more already labeled datasets that target different object classes, cross-dataset training aims to detect the union of the different classes, so that we do not have to label all the classes  ...  We present a conceptually simple, flexible and general framework for cross-dataset training in object detection.  ...  Different colors in the dataset-aware focal loss imply different classes from the merged dataset.  ...  label mapping and dataset-aware classification loss, which is a revised version of the focal loss in RetinaNet.  ... 
arXiv:2001.04621v1 fatcat:rzqrarzpuzentolvjuiaozmrx4
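The dataset-aware classification loss is described as a revised focal loss. The standard focal loss it builds on, for a positive example, can be sketched as follows; the alpha and gamma values follow the common RetinaNet defaults, and the dataset-aware revision itself is not reproduced:

```python
import math

def focal_loss(p, gamma=2.0, alpha=0.25):
    """Focal loss for a positive example: standard cross-entropy
    -log(p), down-weighted by (1 - p)^gamma so that easy,
    well-classified examples contribute little to the total loss."""
    return -alpha * (1.0 - p) ** gamma * math.log(p)

# a confident prediction contributes far less than an uncertain one
assert focal_loss(0.9) < focal_loss(0.5)
```

The (1 - p)^gamma modulating factor is what lets detectors train on the extreme foreground/background imbalance without the loss being dominated by easy negatives.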

Instance-Aware Graph Convolutional Network for Multi-Label Classification [article]

Yun Wang, Tong Zhang, Zhen Cui, Chunyan Xu, Jian Yang
2020 arXiv   pre-print
Graph convolutional neural network (GCN) has effectively boosted the multi-label image recognition task by introducing label dependencies based on statistical label co-occurrence of data.  ...  As a whole, two fused branches of sub-networks are involved in the framework: a global branch modeling the whole image and a region-based branch exploring dependencies among regions of interests (ROIs)  ...  So our loss is L = L_ML + L_KL (12). For L_ML, it is the traditional multi-label loss function for the multi-label recognition task.  ... 
arXiv:2008.08407v1 fatcat:5rkjnqwk4vbp3m4jhcbystlbte

Maximum Margin Multi-Label Structured Prediction

Christoph H. Lampert
2011 Neural Information Processing Systems  
In this work we derive a maximum-margin training formulation for multi-label structured prediction that remains computationally tractable while achieving high prediction accuracy.  ...  We study multi-label prediction for structured output sets, a problem that occurs, for example, in object detection in images, secondary structure prediction in computational biology, and graph matching  ...  We can also add further flexibility via a data-independent, but label-dependent, term. Note that our setup differs from SSVM training in this regard.  ... 
dblp:conf/nips/Lampert11 fatcat:bg47i27i6nc6rhvxocv7zh5nai

Shape-aware Semi-supervised 3D Semantic Segmentation for Medical Images [article]

Shuailin Li, Chuyu Zhang, Xuming He
2020 arXiv   pre-print
During training, we introduce an adversarial loss between the predicted SDMs of labeled and unlabeled data so that our network is able to capture shape-aware features more effectively.  ...  To achieve this, we develop a multi-task deep network that jointly predicts semantic segmentation and signed distance map (SDM) of object surfaces.  ...  Our learning loss consists of a multi-task supervised term and an adversarial loss on the SDM predictions.  ...  class, which can be viewed as a proxy of confidence measure.  ... 
arXiv:2007.10732v1 fatcat:cfb2ifmgo5barov2onzxhy2aoy
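A signed distance map assigns each point its distance to the object boundary, with the sign indicating inside versus outside. A 1D toy sketch, unrelated to the authors' 3D implementation; the sign convention (negative inside, positive outside) is one common choice and an assumption here:

```python
def signed_distance_1d(mask):
    """Signed distance for a 1D binary mask: for each position, the
    distance to the nearest opposite-class position, negated inside
    the object (a discrete stand-in for distance to the boundary)."""
    inside = [i for i, v in enumerate(mask) if v]
    outside = [i for i, v in enumerate(mask) if not v]

    def dist(i, pts):
        return min(abs(i - j) for j in pts) if pts else 0

    return [-dist(i, outside) if v else dist(i, inside)
            for i, v in enumerate(mask)]

print(signed_distance_1d([0, 1, 1, 0]))  # -> [1, -1, -1, 1]
```

Regressing such a map, instead of only per-pixel class labels, is what gives the network the global shape awareness the snippet describes.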

Context-Aware Convolutional Neural Network for Grading of Colorectal Cancer Histology Images [article]

Muhammad Shaban, Ruqayya Awan, Muhammad Moazam Fraz, Ayesha Azam, David Snead, Nasir M. Rajpoot
2019 arXiv   pre-print
We propose a novel way to incorporate larger context by a context-aware neural network based on images with a dimension of 1,792×1,792 pixels.  ...  A comprehensive analysis of some variants of the proposed method is presented.  ...  + (1 − α) × L_seg(S, S′), (9) where α is a hyper-parameter which defines the contribution of both loss functions in the final loss.  ... 
arXiv:1907.09478v1 fatcat:ctw2rls3pnhw7hh74xfr7ngjou
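The equation numbered (9) in the snippet is a convex combination of two loss terms, with α trading off the classification term against the segmentation term. A minimal sketch; the loss values and the α setting are illustrative:

```python
def combined_loss(l_cls, l_seg, alpha=0.5):
    """Convex combination of a classification loss and a segmentation
    loss: alpha * l_cls + (1 - alpha) * l_seg, so alpha controls how
    much each term contributes to the final training objective."""
    return alpha * l_cls + (1.0 - alpha) * l_seg

# weighting the segmentation term 3x as heavily as classification
print(combined_loss(2.0, 4.0, alpha=0.25))  # -> 3.5
```

Setting α to 0 or 1 recovers a single-task objective, which makes the hyper-parameter easy to sweep in the kind of variant analysis the entry mentions.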

Meta-Generating Deep Attentive Metric for Few-shot Classification [article]

Lei Zhang, Fei Zhou, Wei Wei, Yanning Zhang
2020 arXiv   pre-print
the task description (e.g., a few labelled samples).  ...  Moreover, different from existing methods that utilize a uni-modal weight distribution conditioned on labelled samples for network generation, the proposed meta-learner establishes a multi-modal weight  ...  Algorithm: We employ the cross-entropy loss H(·) between the predicted label and its ground truth as an objective function to train the meta learner in an end-to-end manner  ... 
arXiv:2012.01641v1 fatcat:zmajoooshjg73bvdqce4o7tn4i

Automated Multi-Label Classification based on ML-Plan [article]

Marcel Wever and Felix Mohr and Eyke Hüllermeier
2018 arXiv   pre-print
Second, we show how the scope of ML-Plan, an AutoML tool for multi-class classification, can be extended towards multi-label classification using MEKA, which is a multi-label extension of the well-known  ...  The resulting approach recursively refines MEKA's multi-label classifiers, which sometimes nest another multi-label classifier, up to the selection of a single-label base learner provided by WEKA.  ...  Second, we adjust the node evaluation function to be based on multi-label loss functions. We now explain these two aspects in more detail.  ... 
arXiv:1811.04060v1 fatcat:t64odebc3rcazgzxqfvmeiksbe
Showing results 1 — 15 out of 54,719 results