
Weighted Training for Cross-Task Learning [article]

Shuxiao Chen, Koby Crammer, Hangfeng He, Dan Roth, Weijie J. Su
2022 arXiv   pre-print
In this paper, we introduce Target-Aware Weighted Training (TAWT), a weighted training algorithm for cross-task learning based on minimizing a representation-based task distance between the source and  ...  As a byproduct, the proposed representation-based task distance allows one to reason in a theoretically principled way about several critical aspects of cross-task learning, such as the choice of the source  ...  In this paper, we propose Target-Aware Weighted Training (TAWT), a weighted training algorithm for efficient cross-task learning.  ... 
arXiv:2105.14095v2 fatcat:h6trz3ezbfccvmmusmwvbvz6ju
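The mechanism this abstract describes is re-weighting source-task losses by how close each source task is to the target. A minimal sketch of that idea, assuming a softmax over made-up task distances; the paper's representation-based distance and its actual update rule are not reproduced here:

```python
import torch

# Hypothetical distances from the target task to three source tasks.
task_distances = torch.tensor([0.9, 0.2, 0.6])   # assumed, not from the paper
weights = torch.softmax(-task_distances, dim=0)  # closer source -> larger weight

# Toy per-source losses; in practice each comes from a shared representation.
source_losses = [torch.tensor(1.2, requires_grad=True),
                 torch.tensor(0.4, requires_grad=True),
                 torch.tensor(0.9, requires_grad=True)]

weighted_loss = sum(w * l for w, l in zip(weights, source_losses))
weighted_loss.backward()  # train the shared encoder on the weighted objective
```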

Multi-Task Learning and Weighted Cross-Entropy for DNN-Based Keyword Spotting

Sankaran Panchapagesan, Ming Sun, Aparna Khare, Spyros Matsoukas, Arindam Mandal, Björn Hoffmeister, Shiv Vitaladevuni
2016 Interspeech 2016  
The loss function modifications consist of a combination of multi-task training and weighted cross entropy.  ...  We show that weighted cross-entropy results in additional accuracy improvements.  ... 
doi:10.21437/interspeech.2016-1485 dblp:conf/interspeech/PanchapagesanSK16 fatcat:gc5zgi2jynatbm2r2nuyxk4nyi
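Weighted cross-entropy as mentioned in this abstract is directly supported by common frameworks. A minimal PyTorch sketch; the two-class keyword/background setup and the 5x weight are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Toy logits over {background, keyword} for a batch of 8 frames.
logits = torch.randn(8, 2, requires_grad=True)
targets = torch.randint(0, 2, (8,))

# Up-weight the rarer keyword class so its errors dominate the loss.
class_weights = torch.tensor([1.0, 5.0])  # assumed ratio; tune on dev data
criterion = nn.CrossEntropyLoss(weight=class_weights)

loss = criterion(logits, targets)
loss.backward()  # gradients now reflect the re-weighted objective
```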

Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs [article]

Dasol Hwang, Jinyoung Park, Sunyoung Kwon, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim
2021 arXiv   pre-print
Our proposed method learns to learn a primary task by predicting meta-paths as auxiliary tasks; this can be viewed as a type of meta-learning.  ...  However, auxiliary tasks for heterogeneous graphs, which contain rich semantic information with various types of nodes and edges, have been less explored in the literature.  ...  Meta cross-validation, i.e., cross-validation for meta-learning, helps to keep the weighting function from over-fitting on the meta data.  ... 
arXiv:2007.08294v5 fatcat:k2s54yfa45efhiglqs7fndgoy4

Dynamic Deep Multi-task Learning for Caricature-Visual Face Recognition [article]

Zuheng Ming, Jean-Christophe Burie, Muhammad Muzzamil Luqman
2019 arXiv   pre-print
In this paper, we propose dynamic multi-task learning based on deep CNNs for cross-modal caricature-visual face recognition.  ...  Instead of the conventional multi-task learning with fixed weights of the tasks, the proposed dynamic multi-task learning dynamically updates the weights of tasks according to the importance of the tasks  ...  Figure 2: The proposed multi-task learning framework with dynamic weights of tasks for cross-modal caricature-visual face recognition.  ... 
arXiv:1911.03341v1 fatcat:oq55beyoebf7ncyykqkjbdjvg4
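A hedged sketch of the dynamic weighting idea: task weights are recomputed at each step from the current task losses, so harder tasks receive more weight. The softmax-over-losses rule below is an illustrative stand-in, not the authors' exact importance measure:

```python
import torch

def dynamic_task_weights(losses, temperature=1.0):
    """Higher current loss -> higher weight; weights are detached so the
    weighting itself receives no gradient."""
    detached = torch.stack([l.detach() for l in losses])
    return torch.softmax(detached / temperature, dim=0)

# e.g. losses for the visual, caricature, and cross-modal recognition tasks
losses = [torch.tensor(0.7, requires_grad=True),
          torch.tensor(2.3, requires_grad=True),
          torch.tensor(1.1, requires_grad=True)]
w = dynamic_task_weights(losses)
total = sum(wi * li for wi, li in zip(w, losses))
total.backward()
```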

Adaptive Transfer Learning on Graph Neural Networks [article]

Xueting Han, Zhenhuan Huang, Bang An, Jing Bai
2021 arXiv   pre-print
Conventional pre-training methods may not be effective enough at knowledge transfer since they do not make any adaptation for downstream tasks.  ...  In addition, we learn the weighting model through meta-learning.  ...  tasks for 100/50 epochs on Last-FM/Book-Crossing.  ... 
arXiv:2107.08765v2 fatcat:ngnexsimgfezjfrvqhrtl4cslq

Dynamic Deep Multi-task Learning for Caricature-Visual Face Recognition

Zuheng Ming, Jean-Christophe Burie, Muhammad Muzzamil Luqman
2019 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)  
In this paper, we propose dynamic multi-task learning based on deep CNNs for cross-modal caricature-visual face recognition.  ...  Instead of the conventional multi-task learning with fixed weights of the tasks, the proposed dynamic multi-task learning dynamically updates the weights of tasks according to the importance of the tasks  ...  Figure 2. The proposed multi-task learning framework with dynamic weights of tasks for cross-modal caricature-visual face recognition.  ... 
doi:10.1109/icdarw.2019.00021 dblp:conf/icdar/MingBL19 fatcat:7m4pf7sqube4dcxfvlhsza6lju

Cross-modal Multi-task Learning for Graphic Recognition of Caricature Face [article]

Zuheng Ming, Jean-Christophe Burie, Muhammad Muzzamil Luqman
2020 arXiv   pre-print
The proposed multi-task learning with dynamic task weights makes it possible to train the hard task and the easy task appropriately, instead of getting stuck over-training the easy task as conventional methods do.  ...  The experimental results demonstrate the effectiveness of the proposed dynamic multi-task learning for cross-modal caricature-visual face recognition.  ...  Fig. 3: The proposed multi-task learning framework with dynamic weights of tasks for cross-modal caricature-visual face recognition.  ... 
arXiv:2003.05787v1 fatcat:exeqq7253za2lm5cvhowf6dxv4

A Fast Learning Method for Multilayer Perceptrons in Automatic Speech Recognition Systems

Chenghao Cai, Yanyan Xu, Dengfeng Ke, Kaile Su
2015 Journal of Robotics  
We propose a fast learning method for multilayer perceptrons (MLPs) on large vocabulary continuous speech recognition (LVCSR) tasks.  ...  A back propagation (BP) algorithm that fits the unfolded weight matrices is used to train the restructured MLP, reducing the time complexity of the learning process.  ...  Acknowledgments This work is supported by the Fundamental Research Funds for the Central Universities (YX2014-18), the Beijing Higher Education Young Elite Teacher Project (YETP0768), and the National  ... 
doi:10.1155/2015/797083 fatcat:ytenc3zbhffk5jjyqj5wha6gli

Self-supervised Auxiliary Learning for Graph Neural Networks via Meta-Learning [article]

Dasol Hwang, Jinyoung Park, Sunyoung Kwon, Kyung-Min Kim, Jung-Woo Ha, Hyunwoo J. Kim
2021 arXiv   pre-print
Our method learns to learn a primary task with various auxiliary tasks to improve generalization performance.  ...  Motivated by recent advances in self-supervision for representation learning in natural language processing and computer vision, self-supervised learning has recently been studied to leverage unlabeled  ...  Within a task, the weighting function can adjust the cross-entropy like the focal loss, which focuses on hard examples by decreasing the weights of easy samples.  ... 
arXiv:2103.00771v2 fatcat:rrq6lhmtdze4ljr3yikrfwktvi
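The snippet compares the learned weighting function to the focal loss, which down-weights easy examples. A minimal focal-loss sketch for reference; gamma=2.0 is the conventional default, not a value from this paper:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                       # probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()  # easy samples (high p_t) shrink

logits = torch.randn(16, 10, requires_grad=True)
targets = torch.randint(0, 10, (16,))
focal_loss(logits, targets).backward()
```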

Combining Domain-Specific Meta-Learners in the Parameter Space for Cross-Domain Few-Shot Classification [article]

Shuman Peng, Weilian Song, Martin Ester
2020 arXiv   pre-print
CosML first trains a set of meta-learners, one for each training domain, to learn prior knowledge (i.e., meta-parameters) specific to each domain.  ...  The domain-specific meta-learners are then combined in the parameter space, by taking a weighted average of their meta-parameters, which is used as the initialization parameters of a task network that  ...  We would also like to thank Compute Canada for providing the computational resources.  ... 
arXiv:2011.00179v1 fatcat:jsuq7dkiojevla3gubcj2a5u44
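The combination step this abstract describes is a weighted average in parameter space. A minimal sketch, assuming two domain meta-learners with identical architectures; the mixing weights here are arbitrary, and CosML's actual weighting scheme is not reproduced:

```python
import torch
import torch.nn as nn

def mix_parameters(state_dicts, weights):
    """Weighted average of matching parameter tensors across models."""
    return {name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
            for name in state_dicts[0]}

net_a, net_b = nn.Linear(4, 2), nn.Linear(4, 2)   # toy domain meta-learners
init = mix_parameters([net_a.state_dict(), net_b.state_dict()], [0.7, 0.3])

task_net = nn.Linear(4, 2)
task_net.load_state_dict(init)  # initialise the task network from the mixture
```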

Deep Cross Residual Learning for Multitask Visual Recognition [article]

Brendan Jou, Shih-Fu Chang
2016 arXiv   pre-print
We show how cross-residual learning (CRL) can be integrated in multitask networks to jointly train and detect visual concepts across several tasks.  ...  We propose a novel extension of residual learning for deep networks that enables intuitive learning across multiple related tasks using cross-connections called cross-residuals.  ...  Acknowledgements We thank our reviewers for their helpful and constructive feedback.  ... 
arXiv:1604.01335v2 fatcat:gtlyqtyjajhd5pfdkosu4gumoy
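A hedged sketch of a cross-residual connection between two task branches: each branch keeps its usual identity skip and additionally receives the sibling branch's residual. The layer sizes and two-task setup are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CrossResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.f1 = nn.Linear(dim, dim)  # residual function for task 1
        self.f2 = nn.Linear(dim, dim)  # residual function for task 2

    def forward(self, x1, x2):
        h1 = torch.relu(self.f1(x1))
        h2 = torch.relu(self.f2(x2))
        # identity skip plus a cross-residual from the sibling branch
        return x1 + h1 + h2, x2 + h2 + h1

block = CrossResidualBlock(8)
y1, y2 = block(torch.randn(3, 8), torch.randn(3, 8))
```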

Better Self-training for Image Classification through Self-supervision [article]

Attaullah Sahito, Eibe Frank, Bernhard Pfahringer
2021 arXiv   pre-print
Recently, self-supervision -- learning without manual supervision by solving an automatically-generated pretext task -- has gained prominence in deep learning.  ...  self-training can greatly improve accuracy, for a modest increase in computation time.  ...  Self-training using cross-entropy outperforms metric learning for all five datasets for both randomly initialised and ImageNet pretrained weights.  ... 
arXiv:2109.00778v2 fatcat:mnn72tn3orebbbyat6xq5jp7vi
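One round of the self-training loop underlying this paper, as a minimal sketch: train on labeled data, pseudo-label the unlabeled pool, and keep only confident predictions. The classifier choice and the 0.95 threshold are assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training_round(clf, X_lab, y_lab, X_unlab, threshold=0.95):
    clf.fit(X_lab, y_lab)                   # supervised warm-up
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= threshold   # confident pseudo-labels only
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
    return clf.fit(X_new, y_new)            # retrain on the enlarged set
```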

Cross-Thought for Sentence Encoder Pre-training [article]

Shuohang Wang, Yuwei Fang, Siqi Sun, Zhe Gan, Yu Cheng, Jing Jiang, Jingjing Liu
2020 arXiv   pre-print
In this paper, we propose Cross-Thought, a novel approach to pre-training a sequence encoder, which is instrumental in building reusable sequence embeddings for large-scale NLP tasks such as question answering  ...  Experiments on question answering and textual entailment tasks demonstrate that our pre-trained encoder can outperform state-of-the-art encoders trained with continuous sentence signals as well as traditional  ...  (ii) Our model can be easily finetuned on diverse downstream tasks. The attention weights of the pre-trained cross-sequence Transformers can also be directly used for ranking tasks.  ... 
arXiv:2010.03652v1 fatcat:fb5f7fx4ufgbfce6tj23m6ct3u

Gradual Training Method for Denoising Auto Encoders [article]

Alexander Kalmanovich, Gal Chechik
2015 arXiv   pre-print
Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network.  ...  training on MNIST and CIFAR datasets.  ...  Unsupervised learning for denoising. We first evaluate gradual training in an unsupervised task of image denoising.  ... 
arXiv:1504.02902v1 fatcat:24vyjmzgcbb7rdba6vfbwivkme
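A single denoising-autoencoder training step for context, assuming Gaussian corruption on MNIST-sized inputs; the paper's gradual, layer-by-layer schedule is not reproduced here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc, dec = nn.Linear(784, 128), nn.Linear(128, 784)
x = torch.rand(32, 784)                   # clean inputs (e.g. MNIST pixels)
x_noisy = x + 0.3 * torch.randn_like(x)   # corrupt, then reconstruct clean x
recon = dec(torch.relu(enc(x_noisy)))
loss = F.mse_loss(recon, x)
loss.backward()
```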

Learning Multiple Dense Prediction Tasks from Partially Annotated Data [article]

Wei-Hong Li, Xialei Liu, Hakan Bilen
2022 arXiv   pre-print
pairs, and avoids learning trivial cross-task relations by retaining high-level information about the input image.  ...  We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.  ...  We follow [59] and use fixed loss weights for training all multi-task learning methods, i.e., the loss weights are 1, 2, 10, 5, 50 for semantic segmentation, human parts segmentation, surface normal estimation  ... 
arXiv:2111.14893v3 fatcat:5ij72ybxyzf7jnwn6fhatpb4za
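The excerpt quotes fixed loss weights of 1, 2, 10, 5, 50 across the five dense prediction tasks but names only the first three before it is cut off; the last two task names below are placeholders. A minimal sketch of that fixed-weight objective under partial annotation:

```python
import torch

TASK_WEIGHTS = {"semantic_seg": 1.0, "human_parts": 2.0,
                "surface_normals": 10.0, "task_4": 5.0, "task_5": 50.0}

def multitask_loss(per_task_losses):
    """Weighted sum over whichever tasks are annotated in this batch."""
    return sum(TASK_WEIGHTS[t] * l for t, l in per_task_losses.items())

# Partially annotated batch: only two tasks carry labels.
batch = {"semantic_seg": torch.tensor(0.8), "surface_normals": torch.tensor(0.1)}
print(multitask_loss(batch))
```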
Showing results 1 — 15 out of 364,882