58,120 Hits in 5.0 sec

Transfer Learning via Multiple Inter-task Mappings [chapter]

Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, Ioannis Vlahavas
2012 Lecture Notes in Computer Science  
Experimental results show that the use of multiple inter-task mappings can significantly boost the performance of transfer learning methodologies, relative to using a single mapping or learning without  ...  In this paper we investigate using multiple mappings for transfer learning in reinforcement learning tasks.  ...  In the absence of such source task instances, no transfer takes place.  ... 
doi:10.1007/978-3-642-29946-9_23 fatcat:jvczkhx6sbacbgesvkbqfkhuda
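As a minimal sketch of the idea behind this line of work (the dictionary-based Q-table and all names are illustrative assumptions, not the chapter's implementation), an inter-task mapping over states and actions can seed a target task's value function from a source task's; where no mapped source counterpart exists, no transfer takes place:

```python
# Hedged sketch: initializing a target-task Q-table via an inter-task
# mapping. State/action names and the fallback rule are illustrative.

def transfer_q_values(q_source, state_map, action_map,
                      target_states, target_actions):
    """Initialize a target Q-table by mapping each target (state, action)
    pair back to its source-task counterpart; unmapped pairs start at 0."""
    q_target = {}
    for s in target_states:
        for a in target_actions:
            # Map the target pair to the source task; fall back to 0.0
            # when no counterpart exists (no transfer takes place).
            src_pair = (state_map.get(s), action_map.get(a))
            q_target[(s, a)] = q_source.get(src_pair, 0.0)
    return q_target

q_source = {("s0", "left"): 1.5, ("s1", "right"): -0.5}
state_map = {"t0": "s0", "t1": "s1"}
action_map = {"west": "left", "east": "right"}
q_target = transfer_q_values(q_source, state_map, action_map,
                             ["t0", "t1"], ["west", "east"])
print(q_target[("t0", "west")])  # 1.5
```

Using several such mappings simply yields several candidate initializations, which is where the selection problem studied in these papers arises.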

Transfer learning with probabilistic mapping selection

Anestis Fachantidis, Ioannis Partalas, Matthew E Taylor, Ioannis Vlahavas
2014 Adaptive Behavior  
In particular, we introduce novel tasks for transfer learning in a realistic simulation of the iCub robot, demonstrating the ability of the method to select mappings in complex tasks where human intuition  ...  Experimental results show that the use of multiple inter-task mappings, accompanied by a probabilistic selection mechanism, can significantly boost the performance of transfer learning relative to 1)  ...  Acknowledgements This work was supported in part by NSF IIS-1149917. We would also like to deeply thank Prof.  ... 
doi:10.1177/1059712314559525 fatcat:x2v6ugdrlvdkth464cmujpqnnq
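A hedged sketch of how a probabilistic selection mechanism over several candidate mappings might look; the softmax rule, temperature, and per-mapping return estimates below are assumptions for illustration, not the paper's actual mechanism:

```python
import math
import random

def select_mapping(returns, temperature=1.0):
    """Pick a mapping index with probability proportional to
    exp(average return / temperature) -- a softmax over returns."""
    prefs = [math.exp(r / temperature) for r in returns]
    total = sum(prefs)
    probs = [p / total for p in prefs]
    r, cum = random.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

random.seed(0)
avg_returns = [2.0, 0.5, -1.0]  # one estimate per candidate mapping
picks = [select_mapping(avg_returns) for _ in range(1000)]
print(picks.count(0) > picks.count(2))  # the best mapping dominates
```

The design point is that poor mappings are still sampled occasionally, so their return estimates keep improving while the best mapping drives most of the transfer.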

Transfer via inter-task mappings in policy search reinforcement learning

Matthew E. Taylor, Shimon Whiteson, Peter Stone
2007 Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems - AAMAS '07  
The ambitious goal of transfer learning is to accelerate learning on a target task after training on a different, but related, source task.  ...  In particular, this paper utilizes transfer via inter-task mappings for policy search methods (TVITM-PS) to construct a transfer functional that translates a population of neural network policies trained  ...  This re-search was supported in part by DARPA grant HR0011-04-1-0035, NSF CAREER award IIS-0237699, and NSF award EIA-0303609.  ... 
doi:10.1145/1329125.1329170 dblp:conf/atal/TaylorWS07 fatcat:aiar7g6asvchbiz4qta2v2h6pm
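For the policy-search setting, a transfer functional can translate a trained source network into an initial target network via the inter-task mapping. The row-copy rule and all names below are illustrative assumptions, not the paper's exact TVITM-PS functional:

```python
import random

def translate_input_weights(w_source, state_map, n_target_inputs):
    """Build target input-layer weights (one row per target state
    variable) by copying each mapped source row; unmapped target
    inputs start at zero."""
    n_hidden = len(w_source[0])
    w_target = [[0.0] * n_hidden for _ in range(n_target_inputs)]
    for tgt_idx, src_idx in state_map.items():
        w_target[tgt_idx] = list(w_source[src_idx])
    return w_target

random.seed(1)
# A toy source policy: 3 state variables feeding 4 hidden units.
w_source = [[random.gauss(0, 1) for _ in range(4)] for _ in range(3)]
state_map = {0: 2, 1: 0}  # target state variable -> source counterpart
w_target = translate_input_weights(w_source, state_map, n_target_inputs=4)
print(len(w_target), w_target[0] == w_source[2])  # 4 True
```

Applied to every member of a source population, this yields an initial target population that policy search can then refine.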

Continual learning via inter-task synaptic mapping

Mao Fubing, Weng Weiwei, Mahardhika Pratama, Edward Yapp Kien Yee
2021 Knowledge-Based Systems  
ISYANA exhibits competitive performance compared to the state of the art. The code of ISYANA is made available.  ...  An Inter-Task Synaptic Mapping (ISYANA) is proposed here to underpin knowledge retention for continual learning.  ...  Acknowledgment This research is financially supported by National Research Foundation, Republic of Singapore under IAFPP in the AME domain (contract no.: A19C1A0018).  ... 
doi:10.1016/j.knosys.2021.106947 fatcat:ovaf2wtvordqhnwzr7c4r6paaq
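For a flavor of synapse-level knowledge retention, here is a generic quadratic-penalty sketch in the spirit of parameter-importance methods for continual learning; the formulation, names, and numbers are assumptions for illustration and not ISYANA's actual inter-task mapping:

```python
# Hedged sketch: penalize drift in parameters that mattered for
# earlier tasks, so new-task training does not overwrite them.

def retention_penalty(params, old_params, importance, strength=1.0):
    """Quadratic penalty weighting each parameter's drift from its
    old-task value by its estimated importance."""
    return strength * sum(
        imp * (p - p_old) ** 2
        for p, p_old, imp in zip(params, old_params, importance)
    )

params = [0.9, -0.2, 0.4]
old_params = [1.0, 0.0, 0.4]
importance = [10.0, 0.1, 5.0]  # high = crucial for old tasks
print(round(retention_penalty(params, old_params, importance), 3))  # 0.104
```

Added to the new task's loss, the penalty leaves unimportant parameters free to move while anchoring those the old tasks depend on.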

Unsupervised New-set Domain Adaptation with Self-supervised Knowledge

Yunyun Wang, School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China, Guwei Sun, Guoxiang Zhao, Hui Xue
2022 International Journal of Software and Informatics  
Sample contrastive knowledge from the source domain is then transferred to the target domain to assist the learning of class-discriminative features in the target domain.  ...  In such a case, directly transferring the class-discriminative knowledge from the source domain may impair the performance in the target domain and lead to negative transfer.  ...  The learning task is to transfer the related knowledge from these source domains to the target domain and thereby improve the learning performance in the target domain.  ... 
doi:10.21655/ijsi.1673-7288.00269 fatcat:g2d5wqadhfe4ze7lwgoiqne4ja

Adversarial Multi-Source Transfer Learning in Healthcare: Application to Glucose Prediction for Diabetic People [article]

Maxime De Bois, Mounîm A. El Yacoubi, Mehdi Ammi
2020 arXiv   pre-print
The adversarial training framework improves the learning of a general feature representation in a multi-source environment, enhancing the knowledge transfer to an unseen target.  ...  Deep learning has yet to revolutionize general practices in healthcare, despite promising results for some specific tasks.  ...  Sylvie JOANNIDIS for their help in building the IDIAB dataset used in this study.  ... 
arXiv:2006.15940v1 fatcat:tyylr3teyfdhxpif42azxwrfku

Learning Invariant Representation with Consistency and Diversity for Semi-supervised Source Hypothesis Transfer [article]

Xiaodong Wang, Junbao Zhuo, Shuhao Cui, Shuhui Wang
2021 arXiv   pre-print
Semi-supervised domain adaptation (SSDA) aims to solve tasks in the target domain by utilizing transferable information learned from the available source domain and a few labeled target data.  ...  The biased model is prone to categorize samples of minority categories into majority ones, resulting in low prediction diversity.  ...  It is worth noting that CDL outperforms SHOT++ on 11 of the 12 transfer tasks.  ... 
arXiv:2107.03008v2 fatcat:rse5gdh6unfe5etka5ze3mjmpa

UM-Adapt: Unsupervised Multi-Task Adaptation Using Adversarial Cross-Task Distillation [article]

Jogendra Nath Kundu, Nishank Lakkakula, R. Venkatesh Babu
2019 arXiv   pre-print
UM-Adapt yields state-of-the-art transfer learning results on ImageNet classification and comparable performance on the PASCAL VOC 2007 detection task, even with a smaller backbone-net.  ...  Moreover, the resulting semi-supervised framework outperforms the current fully-supervised multi-task learning state-of-the-art on both the NYUD and Cityscapes datasets.  ...  We also thank Google India for the travel grant.  ... 
arXiv:1908.03884v3 fatcat:l7ejobww6jh3xjg5ezkcgt4e6q

Abstraction and Generalization in Reinforcement Learning: A Summary and Framework [chapter]

Marc Ponsen, Matthew E. Taylor, Karl Tuyls
2010 Lecture Notes in Computer Science  
In this paper we survey the basics of reinforcement learning, generalization and abstraction.  ...  We start with an introduction to the fundamentals of reinforcement learning and motivate the necessity for generalization and abstraction.  ...  Marc Ponsen is sponsored by the Interactive Collaborative Information Systems (ICIS) project, supported by the Dutch Ministry of Economic Affairs, grant nr: BSIK03024.  ... 
doi:10.1007/978-3-642-11814-2_1 fatcat:vovcchfhfrcpfat6evfefc54em

Page 50 of Psychological Abstracts Vol. 83, Issue 1 [page]

1996 Psychological Abstracts  
—Examined the spatial relationships that occur in groups of schooling fish (Aphyocharax erithrurus) when confronted with a 180° reversal of a learned spatial discrimination, in the absence or presence of  ...  (U Limburg, Maastricht, Netherlands) Spontaneous response tendencies in noncontingent trials of a matching-to-position task in rats: Consequences for learning the matching and nonmatching tasks.  ... 

Unsupervised Transfer Learning with Self-Supervised Remedy [article]

Jiabo Huang, Shaogang Gong
2020 arXiv   pre-print
In this work, we address this problem by transfer clustering, which aims to learn a discriminative latent space of the unlabelled target data in a novel domain by knowledge transfer from labelled related  ...  Extensive experiments on four datasets for image clustering tasks reveal the superiority of our model over the state-of-the-art transfer clustering techniques.  ...  Due to the absence of labelling in the target domain, the target distribution is agnostic, as is the discrepancy between it and the source distribution.  ... 
arXiv:2006.04737v1 fatcat:jivttxerg5chhaxo4qdwvtokiq

Continual Contrastive Self-supervised Learning for Image Classification [article]

Zhiwei Lin, Yongtao Wang, Hongxiang Lin
2021 arXiv   pre-print
The burgeoning studies on supervised continual learning have achieved great progress, while catastrophic forgetting in unsupervised learning remains largely unexplored.  ...  To improve the visual representation of self-supervised learning, larger and more varied data are needed. In the real world, unlabeled data is generated at all times.  ...  Finally, to strengthen the intra-contrast of old tasks and the inter-contrast between previous tasks and current tasks, we propose self-supervised knowledge distillation in Section 3.3 and the extra sample queue in  ... 
arXiv:2107.01776v2 fatcat:5ihidzgk65hwnjriup4fbbx47m

IRTED-TL: An Inter-Region Tax Evasion Detection Method Based on Transfer Learning

Xulyu Zhu, Zheng Yan, Jianfei Ruan, Qinghua Zheng, Bo Dong
2018 2018 17th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/ 12th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE)  
We exploit evasion-related knowledge in one region and leverage transfer learning techniques to reinforce the tax evasion detection tasks of other regions in which training examples are lacking.  ...  We provide a unified framework that takes advantage of auxiliary data using a transfer learning mechanism and builds an interpretable classifier for inter-region tax evasion detection.  ...  We can construct a tax evasion detection model in the absence of labeled data in one target region using evasion knowledge extracted from other regions by applying transfer learning.  ... 
doi:10.1109/trustcom/bigdatase.2018.00169 dblp:conf/trustcom/ZhuYRZD18 fatcat:qwi467xsxnaefbqg33chqmmbva

Dual Adversarial Adaptation for Cross-Device Real-World Image Super-Resolution [article]

Xiaoqian Xu, Pengxu Wei, Weikai Chen, Mingzhi Mao, Liang Lin, Guanbin Li
2022 arXiv   pre-print
The proposed task is highly challenging due to the absence of paired data from various imaging devices.  ...  InterAA and IntraAA together improve the model transferability from the source domain to the target.  ... 
arXiv:2205.03524v1 fatcat:ljmkh2x5urcupc2eawnqixyrju

A Comprehensive Survey of Few-shot Learning: Evolution, Applications, Challenges, and Opportunities [article]

Yisheng Song, Ting Wang, Subrota K Mondal, Jyoti Prakash Sahoo
2022 arXiv   pre-print
Despite the recent creative works in tackling FSL tasks, learning valid information rapidly from just a few or even zero samples still remains a serious challenge.  ...  For the sake of avoiding conceptual confusion, we first elaborate and compare a set of similar concepts including few-shot learning, transfer learning, and meta-learning.  ... 
arXiv:2205.06743v2 fatcat:xmxht2ileja53o2o5b4vrw32ey
Showing results 1 — 15 out of 58,120 results