Partial Least Squares Regression Performs Well in MRI-Based Individualized Estimations
2019
Frontiers in Neuroscience
(multi-label learning). ...
importance for these estimations. ...
on multi-label learning (R = 0.536, compared to R = 0.525 for single-label learning). ...
doi:10.3389/fnins.2019.01282
pmid:31827420
pmcid:PMC6890557
fatcat:v42hqfd5mrffheumamtlo3dcbi
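The comparison above (R = 0.536 multi-label vs. R = 0.525 single-label) is easy to reproduce in spirit with scikit-learn's PLSRegression. A minimal sketch, with random data standing in for the MRI features and behavioral scores, and an arbitrary component count (in practice it would be chosen by cross-validation):

```python
# Sketch: multi-label vs. single-label PLS regression. All data and the
# component count are synthetic placeholders, not the paper's setup.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))                     # stand-in for vectorized MRI features
W = rng.normal(size=(500, 3))
Y = X @ W + rng.normal(scale=5.0, size=(200, 3))    # stand-in for 3 behavioral scores

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# Multi-label learning: one PLS model estimates all scores jointly.
pred_multi = PLSRegression(n_components=10).fit(X_tr, Y_tr).predict(X_te)

# Single-label learning: an independent PLS model per score.
pred_single = np.column_stack([
    PLSRegression(n_components=10).fit(X_tr, Y_tr[:, j]).predict(X_te).ravel()
    for j in range(Y.shape[1])
])

for name, pred in [("multi-label", pred_multi), ("single-label", pred_single)]:
    r = np.mean([np.corrcoef(Y_te[:, j], pred[:, j])[0, 1] for j in range(Y.shape[1])])
    print(name, "mean Pearson R:", round(r, 3))
```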
Multi-task Learning Using Multi-modal Encoder-Decoder Networks with Shared Skip Connections
2017
2017 IEEE International Conference on Computer Vision Workshops (ICCVW)
Multi-task learning is a promising approach for efficiently and effectively addressing multiple mutually related recognition tasks. ...
learning. ...
In contrast to such multi-modal single-task learning methods, relatively few studies have addressed multi-modal multi-task learning. Ehrlich et al. ...
doi:10.1109/iccvw.2017.54
dblp:conf/iccvw/KugaKSSM17
fatcat:b6er3vhppbfi7fxorcy5xawnhi
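A minimal PyTorch sketch of the idea named in the title: modality-specific encoders and task-specific decoders whose skip connections pass through a shared layer. Channel counts, the fusion rule, and the two tasks (segmentation and depth) are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class SharedSkipMultiModal(nn.Module):
    """Two modality encoders, two task decoders, skips through one shared layer."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc_rgb = nn.Conv2d(3, ch, 3, stride=2, padding=1)
        self.enc_depth = nn.Conv2d(1, ch, 3, stride=2, padding=1)
        self.bottleneck = nn.Conv2d(ch, ch, 3, padding=1)
        self.shared_skip = nn.Conv2d(ch, ch, 1)    # shared by both modalities
        self.dec_seg = nn.ConvTranspose2d(ch, 21, 4, stride=2, padding=1)
        self.dec_dep = nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1)

    def forward(self, rgb, depth):
        f = torch.relu(self.enc_rgb(rgb)) + torch.relu(self.enc_depth(depth))
        h = torch.relu(self.bottleneck(f)) + self.shared_skip(f)  # shared skip connection
        return self.dec_seg(h), self.dec_dep(h)

model = SharedSkipMultiModal()
seg, dep = model(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
print(seg.shape, dep.shape)   # (2, 21, 64, 64), (2, 1, 64, 64)
```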
Multi-Instance Dynamic Ordinal Random Fields for Weakly Supervised Facial Behavior Analysis
2018
IEEE Transactions on Image Processing
We propose a Multi-Instance-Learning (MIL) approach for weakly-supervised learning problems, where a training set is formed by bags (sets of feature vectors or instances) and only labels at bag-level are ...
To this end, we propose Multi-Instance Dynamic Ordinal Random Fields (MI-DORF). In this framework, we treat instance-labels as temporally-dependent latent variables in an Undirected Graphical Model. ...
In order to learn the model F from T, it is necessary to incorporate prior knowledge defining the Multi-Instance relation between labels y and latent ordinal states h. ...
doi:10.1109/tip.2018.2830189
pmid:29993690
fatcat:bryu22dktrgxzcbswy6eesqc7y
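MI-DORF itself models instance labels as temporally dependent latent ordinal states in an undirected graphical model; as a much simpler illustration of the weakly supervised setting it addresses (bags of instances, labels only at bag level), here is the classic "max" assumption, where a bag's score is the maximum of its instance scores:

```python
# Toy multi-instance setup: bags of feature vectors, one score per bag.
import numpy as np

def bag_score(instances, w):
    """Bag-level proxy: the maximum instance score in the bag ("max" assumption)."""
    return np.max(instances @ w)

rng = np.random.default_rng(1)
w = rng.normal(size=8)                                        # illustrative instance scorer
bags = [rng.normal(size=(rng.integers(3, 10), 8)) for _ in range(5)]
for i, bag in enumerate(bags):
    print(f"bag {i}: {bag.shape[0]} instances, score {bag_score(bag, w):+.2f}")
```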
Overcoming data scarcity with transfer learning
[article]
2017
arXiv
pre-print
Here, we describe and compare three techniques for transfer learning: multi-task, difference, and explicit latent variable architectures. ...
For activation energies of steps in NO reduction, the explicit latent variable method is not only the most accurate, but also enjoys cancellation of errors in functions that depend on multiple tasks. ...
Unlike multi-task learning and explicit latent variables, difference learning cannot be used directly for multi-class classification. ...
arXiv:1711.05099v1
fatcat:nbk535l4cfbtjgwn5xyfckmxse
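Of the three techniques compared above, difference learning is the simplest to sketch: fit the cheap task where labels are abundant, then fit a second model to the expensive-minus-cheap residual on the few expensive labels. The Ridge models and synthetic data below are placeholders, not the paper's setup:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 20))
y_cheap = X @ rng.normal(size=20) + 0.1 * rng.normal(size=300)  # abundant labels
y_expensive = y_cheap + 0.3 * np.tanh(X[:, 0])                  # scarce labels

few = rng.choice(300, size=30, replace=False)                   # only 30 expensive labels
diff = Ridge().fit(X[few], (y_expensive - y_cheap)[few])        # learn the difference

# Predict the expensive quantity, assuming cheap labels are available at test time.
pred = y_cheap + diff.predict(X)
print("RMSE:", float(np.sqrt(np.mean((pred - y_expensive) ** 2))))
```

Consistent with the last snippet above, nothing here carries over directly to multi-class classification: the residual is only meaningful for regression targets.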
Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation
[chapter]
2016
Lecture Notes in Computer Science
Zero-Shot Learning (ZSL) promises to scale visual recognition by bypassing the conventional model training requirement of annotated examples for every category. ...
This is achieved by establishing a mapping connecting low-level features and a semantic description of the label space, referred to as visual-semantic mapping, on auxiliary data. ...
Kullback-Leibler Importance Estimation Procedure (KLIEP): We first introduce how to estimate a per-instance auxiliary-data weight given the distribution of the target data X_te. ...
doi:10.1007/978-3-319-46475-6_22
fatcat:zmwhktds3vfndj6wh364qpsutu
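A minimal NumPy sketch of the KLIEP step the snippet introduces: model the density ratio w(x) = p_te(x)/p_tr(x) as a non-negative mixture of Gaussian kernels centered on the target data, maximize the mean log-weight of target points by projected gradient ascent, and normalize so the training weights average to one. Kernel width, centers, iteration count, and step size are assumptions:

```python
import numpy as np

def kliep_weights(X_tr, X_te, sigma=1.0, n_iter=500, lr=1e-2):
    """Estimate w(x) = p_te(x)/p_tr(x) as a non-negative Gaussian-kernel mixture."""
    c = X_te                                                   # kernel centers on target data
    K = lambda A: np.exp(-((A[:, None, :] - c[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    K_te, K_tr = K(X_te), K(X_tr)
    alpha = np.ones(len(c))
    for _ in range(n_iter):
        grad = (K_te / (K_te @ alpha)[:, None]).mean(axis=0)   # ascend mean log w(x_te)
        alpha = np.maximum(alpha + lr * grad, 0.0)             # keep w(x) >= 0
        alpha /= (K_tr @ alpha).mean()                         # constraint: mean train weight = 1
    return K_tr @ alpha                                        # per-instance auxiliary weights

rng = np.random.default_rng(3)
X_tr = rng.normal(0.0, 1.0, size=(200, 2))   # auxiliary (source) data
X_te = rng.normal(0.5, 1.0, size=(100, 2))   # target data X_te
w = kliep_weights(X_tr, X_te)
print(w.mean(), w.min(), w.max())            # mean ~1 by the normalization constraint
```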
Knowledge Transfer for Multi-labeler Active Learning
[chapter]
2013
Lecture Notes in Computer Science
In this paper, we address multi-labeler active learning, where data labels can be acquired from multiple labelers with various levels of expertise. ...
To solve this problem, we propose a new probabilistic model that transfers knowledge from a rich set of labeled instances in some auxiliary domains to help model labelers' expertise for active learning ...
To the best of our knowledge, our work is the first to leverage transfer learning to help model labelers' expertise for the multi-labeler active learning problem. ...
doi:10.1007/978-3-642-40988-2_18
fatcat:dfgsjkoisjb5zmzsm62o3z5kh4
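A toy sketch of the core idea: estimate each labeler's expertise where ground truth is known, then weight their votes accordingly. The paper transfers a probabilistic expertise model from auxiliary domains; here expertise is reduced to plain accuracy on auxiliary labels, which is an assumption far simpler than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
truth_aux = rng.integers(0, 2, size=100)     # auxiliary domain with known labels
# Three labelers with different accuracies (0.9, 0.7, 0.55) on the auxiliary data.
labelers = [np.where(rng.random(100) < p, truth_aux, 1 - truth_aux) for p in (0.9, 0.7, 0.55)]
reliability = np.array([np.mean(l == truth_aux) for l in labelers])

def weighted_vote(votes):
    """Combine one vote per labeler, weighting by estimated reliability."""
    scores = np.bincount(votes, weights=reliability, minlength=2)
    return int(np.argmax(scores))

print("estimated reliabilities:", reliability.round(2))
print("fused label for votes [1, 1, 0]:", weighted_vote(np.array([1, 1, 0])))
```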
Cross-Domain Multitask Learning with Latent Probit Models
[article]
2012
arXiv
pre-print
We derive theoretical bounds for the estimation error of the classifier in terms of the sparsity of domain transforms. An expectation-maximization algorithm is derived for learning the LPM. ...
We assume the data in multiple tasks are generated from a latent common domain via sparse domain transforms and propose a latent probit model (LPM) to jointly learn the domain transforms, and the shared ...
Introduction: There are two basic approaches to the analysis of data from two or more tasks: single-task learning (STL) and multi-task learning (MTL). ...
arXiv:1206.6419v1
fatcat:4ggiwwge2fd3hhggj4zylzumtq
To Avoid the Pitfall of Missing Labels in Feature Selection: A Generative Model Gives the Answer
2020
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
In multi-label learning, instances have a large number of noisy and irrelevant features, and each instance is associated with a set of class labels wherein label information is generally incomplete. ...
These missing labels are like the two sides of a coin: before the toss, one cannot predict whether the information they provide for feature selection will be favorable (relevant) or unfavorable (irrelevant). ...
..., N_k(i)) plays an important role in estimating the label observability. ...
doi:10.1609/aaai.v34i04.6127
fatcat:c7mwfflsozckdl7wtwpm3pqmha
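The paper's answer is a generative model; the sketch below is only a naive baseline for the same setting, masking unobserved labels (coded -1) when scoring features against each label:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 10))
Y = (X[:, :3] + rng.normal(scale=0.5, size=(300, 3)) > 0).astype(int)  # 3 true labels
Y_obs = np.where(rng.random(Y.shape) < 0.4, -1, Y)                     # 40% of labels missing

def masked_feature_scores(X, Y_obs):
    """Score each feature by |correlation| with each label, over observed entries only."""
    scores = np.zeros(X.shape[1])
    for k in range(Y_obs.shape[1]):
        m = Y_obs[:, k] >= 0                     # mask out the missing labels
        for j in range(X.shape[1]):
            scores[j] += abs(np.corrcoef(X[m, j], Y_obs[m, k])[0, 1])
    return scores

print("top features:", np.argsort(-masked_feature_scores(X, Y_obs))[:3])
```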
User Satisfaction Estimation with Sequential Dialogue Act Modeling in Goal-oriented Conversational Systems
[article]
2022
arXiv
pre-print
In this paper, we propose a novel framework, namely USDA, to incorporate the sequential dynamics of dialogue acts for predicting user satisfaction, by jointly learning User Satisfaction Estimation and ...
User Satisfaction Estimation (USE) is an important yet challenging task in goal-oriented conversational systems. ...
Acknowledgments: This research was supported by the Center for Perceptual and Interactive Intelligence (CPII) Ltd under the Innovation and Technology Commission's InnoHK scheme. ...
arXiv:2202.02912v1
fatcat:a7i2amxrjfh4rezbhinpjwd47y
Combining Generative/Discriminative Learning for Automatic Image Annotation and Retrieval
2012
International Journal of Intelligence Science
Furthermore, we propose a hybrid framework which employs continuous PLSA to model the visual features of images in the generative learning stage and uses ensembles of classifier chains to classify the multi-label ...
Since the framework combines the advantages of generative and discriminative learning, it can predict semantic annotation precisely for unseen images. ...
Parameter Setting: An important parameter of the experiment is the number of latent aspects for the PLSA-based models. ...
doi:10.4236/ijis.2012.23008
fatcat:y7mhhjarejhlpleycxh6fsdwuu
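The discriminative half of the hybrid framework, ensembles of classifier chains (ECC), is available directly in scikit-learn. A minimal sketch, with random features standing in for the continuous-PLSA image representations:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 30))                                 # stand-in for PLSA features
Y = (X[:, :5] @ rng.normal(size=(5, 4)) > 0).astype(int)       # 4 annotation labels

# Ensemble of chains, each with a different random label ordering.
chains = [ClassifierChain(LogisticRegression(max_iter=1000), order="random", random_state=s)
          for s in range(10)]
for c in chains:
    c.fit(X, Y)

# Ensemble prediction: average chain probabilities, then threshold at 0.5.
proba = np.mean([c.predict_proba(X) for c in chains], axis=0)
print("predicted labels for first image:", (proba[0] > 0.5).astype(int))
```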
Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction
[article]
2017
arXiv
pre-print
In this paper we propose multi-space variational encoder-decoders, a new model for labeled sequence transduction with semi-supervised learning. ...
The generative model can use neural networks to handle both discrete and continuous latent variables to exploit various features of data. ...
Acknowledgments: The authors thank Jiatao Gu, Xuezhe Ma, Zihang Dai and Pengcheng Yin for their helpful discussions. This work has been supported in part by an Amazon Academic Research Award. ...
arXiv:1704.01691v2
fatcat:5w6gxzdg45b4piiogt3xv2gmla
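A compact PyTorch sketch of an encoder-decoder with both latent spaces the abstract describes: a continuous Gaussian variable (reparameterized) and a discrete variable, handled here with a Gumbel-softmax relaxation. The architecture, sizes, and the relaxation choice are assumptions, and the real model operates on label sequences rather than fixed-size vectors:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSpaceVAE(nn.Module):
    def __init__(self, d_in=64, d_z=16, n_cls=8):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z + n_cls)
        self.dec = nn.Linear(d_z + n_cls, d_in)
        self.d_z, self.n_cls = d_z, n_cls

    def forward(self, x, tau=0.5):
        h = self.enc(x)
        mu, logvar, logits = h.split([self.d_z, self.d_z, self.n_cls], dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # continuous latent
        y = F.gumbel_softmax(logits, tau=tau)                  # discrete latent (relaxed)
        x_hat = self.dec(torch.cat([z, y], dim=-1))
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return F.mse_loss(x_hat, x) + kl, y

model = MultiSpaceVAE()
loss, y = model(torch.randn(32, 64))
print(float(loss), y.shape)
```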
SceneCode: Monocular Dense Semantic Reconstruction using Learned Encoded Scene Representations
[article]
2019
arXiv
pre-print
label estimates for each surface element (depth pixels, surfels, or voxels). ...
Using this learned latent space, we can tackle semantic label fusion by jointly optimising the low-dimensional codes associated with each of a set of overlapping images, producing consistent fused label ...
we can use the learned latent space to integrate multi-view semantic labels, and build a monocular dense SLAM system ...
While [3] encoded only geometry, here we show that we can extend the same conditional ...
arXiv:1903.06482v2
fatcat:3sjl5x3ovzhadgodxaul4fv47a
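A toy sketch of the fusion step described above: keep a decoder fixed and jointly optimize the low-dimensional codes of two overlapping views so their decoded label maps agree on the overlap. The random frozen decoder and the fixed overlap region are purely illustrative:

```python
import torch
import torch.nn as nn

# Frozen stand-in for a learned code-to-label-map decoder.
decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 10 * 10))
for p in decoder.parameters():
    p.requires_grad_(False)

codes = torch.zeros(2, 8, requires_grad=True)     # one low-dimensional code per view
opt = torch.optim.Adam([codes], lr=1e-1)
for _ in range(100):
    maps = decoder(codes).view(2, 10, 10)
    # Consistency term: the two views' label maps should agree on the overlap.
    loss = ((maps[0, :, 5:] - maps[1, :, :5]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("overlap disagreement:", float(loss))
```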
Multi-space Variational Encoder-Decoders for Semi-supervised Labeled Sequence Transduction
2017
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
In this paper we propose multi-space variational encoder-decoders, a new model for labeled sequence transduction with semi-supervised learning. ...
The generative model can use neural networks to handle both discrete and continuous latent variables to exploit various features of data. ...
Acknowledgments: The authors thank Jiatao Gu, Xuezhe Ma, Zihang Dai and Pengcheng Yin for their helpful discussions. This work has been supported in part by an Amazon Academic Research Award. ...
doi:10.18653/v1/p17-1029
dblp:conf/acl/ZhouN17
fatcat:6u4b6fex5fflzo4mhlu6j44chq
Large-Scale Bayesian Multi-Label Learning via Topic-Based Label Embeddings
2015
Neural Information Processing Systems
We present a scalable Bayesian multi-label learning model based on learning low-dimensional label embeddings. ...
This makes the model particularly appealing for real-world multi-label learning problems where the label matrix is usually very massive but highly sparse. ...
Finally, although not a focus of this paper, some other important aspects of the multi-label learning problem have also been looked at in recent work. ...
dblp:conf/nips/RaiHHC15
fatcat:4esjlcdeh5hrddm7v27fuoq72e
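The paper's model is fully Bayesian; purely to show the low-dimensional label-embedding structure it exploits, here is a truncated SVD of a massive, highly sparse label matrix:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# A stand-in for a real-world label matrix: very large, very sparse.
Y = sparse_random(1000, 5000, density=0.001, format="csr", random_state=7)
U, s, Vt = svds(Y, k=20)                 # 20-dimensional embeddings
print("instance embeddings:", U.shape, "label embeddings:", Vt.T.shape)
```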
Learning Disentangled Representations with Semi-Supervised Deep Generative Models
[article]
2017
arXiv
pre-print
We further define a general objective for semi-supervised learning in this model class, which can be approximated using an importance sampling procedure. ...
for the remaining variables. ...
This parameter controls the relative weight of the labelled examples relative to the unlabelled examples in the data. ...
arXiv:1706.00400v2
fatcat:havnuwvn65gx7k6t5jv2xaivaa
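The last snippet above mentions a parameter controlling the relative weight of labelled versus unlabelled examples; in code that is just a scalar in the combined objective. The per-example losses below are placeholders for the paper's variational bounds:

```python
import numpy as np

def semi_supervised_objective(loss_unlab, loss_lab, alpha=10.0):
    """Combine per-example losses: unsupervised mean + alpha * supervised mean."""
    return np.mean(loss_unlab) + alpha * np.mean(loss_lab)

print(semi_supervised_objective(np.array([1.0, 1.4]), np.array([0.3])))  # 1.2 + 10*0.3 = 4.2
```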