
Meta-Transfer Learning through Hard Tasks [article]

Qianru Sun, Yaoyao Liu, Zhaozheng Chen, Tat-Seng Chua, Bernt Schiele
2019 arXiv   pre-print
In this paper, we propose a novel approach called meta-transfer learning (MTL), which learns to transfer the weights of a deep NN for few-shot learning tasks.  ...  In addition, we introduce the hard task (HT) meta-batch scheme as an effective learning curriculum that further boosts the learning efficiency of MTL.  ...  Meta-transfer learning (MTL) is our meta-learning paradigm and hard task (HT) meta-batch is our training strategy.  ...  (A parameter-level sketch of the weight-transfer idea follows this entry.)
arXiv:1910.03648v1 fatcat:l2z7dowb5bclzgr2a3ofk3z2za
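
The MTL papers above describe freezing a pretrained feature extractor and meta-learning lightweight per-layer operations on its weights rather than fine-tuning them. Below is a minimal PyTorch sketch of that idea, assuming the scaling-and-shifting (SS) parameterization the authors describe; the class name `SSConv2d` and the assumption that the wrapped layer has a bias are mine, not the paper's exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSConv2d(nn.Module):
    """Frozen pretrained conv layer with learnable scaling/shifting (SS):
    effectively W' = W * phi_s1 and b' = b + phi_s2. Minimal sketch only."""
    def __init__(self, pretrained_conv: nn.Conv2d):
        super().__init__()
        # Freeze the pretrained weights; only the SS parameters are meta-learned.
        # (Assumes the wrapped conv has a bias term.)
        self.weight = nn.Parameter(pretrained_conv.weight.detach(), requires_grad=False)
        self.bias = nn.Parameter(pretrained_conv.bias.detach(), requires_grad=False)
        out_ch = self.weight.shape[0]
        self.phi_s1 = nn.Parameter(torch.ones(out_ch, 1, 1, 1))  # scaling, init 1
        self.phi_s2 = nn.Parameter(torch.zeros(out_ch))          # shifting, init 0
        self.stride = pretrained_conv.stride
        self.padding = pretrained_conv.padding

    def forward(self, x):
        return F.conv2d(x, self.weight * self.phi_s1, self.bias + self.phi_s2,
                        stride=self.stride, padding=self.padding)
```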

Meta-Transfer Learning for Few-Shot Learning

Qianru Sun, Yaoyao Liu, Tat-Seng Chua, Bernt Schiele
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
In addition, we introduce the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL.  ...  In this paper, we propose a novel few-shot learning method called meta-transfer learning (MTL), which learns to adapt a deep NN for few-shot learning tasks.  ...  Typically, weights are fine-tuned for each task; we instead learn a meta-transfer learner across all tasks, a different underlying learning paradigm.  ...  (A sketch of the HT meta-batch scheme follows this entry.)
doi:10.1109/cvpr.2019.00049 dblp:conf/cvpr/SunLCS19 fatcat:d27j662prfglnoarnnz3c5ziy4
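
The snippets describe HT meta-batch only at a high level: after each meta-batch, the classes the learner handled worst are used to re-sample extra "hard" tasks. The sketch below is a hedged reconstruction of that loop; `sample_task_from_classes`, the per-task accuracy dictionaries, and the 5-way cap are hypothetical stand-ins, not the paper's exact interface.

```python
import random

def resample_hard_tasks(task_results, num_hard_tasks, sample_task_from_classes):
    """Hedged sketch of one HT meta-batch step.

    task_results: list of dicts mapping class_id -> accuracy, one per task
                  in the last meta-batch.
    sample_task_from_classes: user-supplied task sampler (hypothetical)."""
    hard_classes = []
    for per_class_acc in task_results:
        # The class the learner handled worst in this task is treated as "hard".
        hard_classes.append(min(per_class_acc, key=per_class_acc.get))
    # Re-sample new tasks biased toward the hard classes (5-way cap assumed).
    return [sample_task_from_classes(
                random.sample(hard_classes, k=min(5, len(hard_classes))))
            for _ in range(num_hard_tasks)]
```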

Meta-Transfer Learning for Few-Shot Learning [article]

Qianru Sun, Yaoyao Liu, Tat-Seng Chua, Bernt Schiele
2019 arXiv   pre-print
In addition, we introduce the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL.  ...  In this paper, we propose a novel few-shot learning method called meta-transfer learning (MTL), which learns to adapt a deep NN for few-shot learning tasks.  ...  Typically, weights are fine-tuned for each task; we instead learn a meta-transfer learner across all tasks, a different underlying learning paradigm.  ...
arXiv:1812.02391v3 fatcat:mifgrgaqbramfmk3xshkqqlkla

Warm-starting DARTS using meta-learning [article]

Matej Grobelnik, Joaquin Vanschoren
2022 arXiv   pre-print
Additionally, we employ a simple meta-transfer architecture that was learned over multiple tasks.  ...  In this work, we present a meta-learning framework to warm-start Differentiable Architecture Search (DARTS).  ...  The authors propose Transferable Neural Architecture Search (T-NAS), based on MAML (Finn et al., 2017) and DARTS, where T-NAS learns a meta-architecture that can be adapted to a new task through only  ...  (A sketch of the warm-starting idea follows this entry.)
arXiv:2205.06355v1 fatcat:7bbldurqpzfmhengh3ozr7tmrm
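
Warm-starting DARTS, as the snippet describes it, amounts to initializing the differentiable architecture parameters (the per-edge operation mixing weights, usually called alphas) from values meta-learned over earlier tasks instead of from a uniform start. A minimal sketch under that reading; `meta_alphas` and the optional noise term are assumptions of mine, not the paper's exact procedure.

```python
import torch

def warm_start_alphas(meta_alphas: torch.Tensor, noise_std: float = 0.0):
    """Initialize DARTS architecture parameters from meta-learned values.

    meta_alphas: hypothetical tensor of per-edge/per-op mixing weights
                 obtained during a meta-training phase over prior tasks."""
    alphas = meta_alphas.clone().detach()
    if noise_std > 0:
        alphas += noise_std * torch.randn_like(alphas)  # optional exploration
    # The warm-started alphas are then optimized as usual by DARTS.
    return torch.nn.Parameter(alphas, requires_grad=True)
```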

ETM: Effective Tuning Method based on Multi-objective and Knowledge Transfer in Image Recognition

Weichun Liu, Chenglin Zhao
2021 IEEE Access  
In addition, we improve the efficiency of the above tuning process by transferring knowledge: we learn meta-parameters from other small-scale tasks and use them to initialize the agent.  ...  INDEX TERMS Image recognition, machine learning, deep learning, tuning, multi-objective, knowledge transfer.  ...  (which transfers knowledge by meta-learning) and TM (which does not transfer knowledge) methods on 12 target tasks (Table 7).  ...
doi:10.1109/access.2021.3062366 fatcat:qx4wkbh6jjfzzabjxh3pycevny

Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [article]

Yucan Zhou, Yu Wang, Jianfei Cai, Yu Zhou, Qinghua Hu, Weiping Wang
2020 arXiv   pre-print
Recently, meta-learning methods, which train a meta-learner on massive additional tasks to gain knowledge that guides few-shot classification, have received much attention.  ...  Inspired by this idea, we propose an easy-to-hard expert meta-training strategy that arranges the training tasks properly: easy tasks are preferred in the first phase, and hard tasks are emphasized  ...  To show the effectiveness of our expert training strategy, we compare it with meta-transfer learning (MTL), which proposes to train the meta-learner with hard tasks (HTs).  ...  (A sketch of the easy-to-hard schedule follows this entry.)
arXiv:2007.06240v1 fatcat:34qfap2as5bupemk6zdksje3oe
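
The expert-training strategy orders tasks from easy to hard over the course of meta-training. Below is a hedged sketch of one way such a schedule could look; the `hardness` scoring function and the sliding-window formulation are illustrative assumptions, not the paper's exact algorithm.

```python
def curriculum_batch(tasks, hardness, progress, batch_size=4):
    """Easy-to-hard curriculum sketch.

    hardness: hypothetical scoring function, e.g. the current meta-learner's
              loss on the task (higher = harder).
    progress: meta-training progress in [0, 1]; 0 prefers easy tasks,
              1 emphasizes the hardest ones."""
    ranked = sorted(tasks, key=hardness)  # easy -> hard
    # Slide a window over the ranked list, moving toward harder tasks.
    start = int(progress * max(0, len(ranked) - batch_size))
    return ranked[start:start + batch_size]
```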

LEEP: A New Measure to Evaluate Transferability of Learned Representations [article]

Cuong V. Nguyen, Tal Hassner, Matthias Seeger, Cedric Archambeau
2020 arXiv   pre-print
Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data.  ...  We introduce a new measure to evaluate the transferability of representations learned by classifiers.  ...  Meta-transfer learning. Meta-transfer learning is a framework for learning to transfer from a source task to a target task (Wei et al., 2018b; Sun et al., 2019; Requeima et al., 2019).  ...  (The LEEP computation is sketched after this entry.)
arXiv:2002.12462v2 fatcat:vl36ctimzjgjhnebt3snudvk2e
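
LEEP (Log Expected Empirical Prediction) scores transferability directly from the source model's soft predictions on the target data: it forms the empirical conditional distribution of target labels given source ("dummy") labels, then averages the log-likelihood of the target labels under that conditional. A NumPy sketch of the computation as defined in the paper; the function name and array-based interface are mine.

```python
import numpy as np

def leep_score(source_probs, target_labels, num_target_classes):
    """LEEP = mean_i log( sum_z P_hat(y_i | z) * theta(x_i)_z ).

    source_probs:  (n, |Z|) softmax outputs of the source model on target data.
    target_labels: (n,) integer target labels y_i."""
    n, num_z = source_probs.shape
    # Empirical joint P_hat(y, z), accumulated from the soft predictions.
    joint = np.zeros((num_target_classes, num_z))
    for theta_x, y in zip(source_probs, target_labels):
        joint[y] += theta_x / n
    cond = joint / joint.sum(axis=0, keepdims=True)  # P_hat(y | z)
    # Expected empirical prediction for each sample, then average log.
    eep = (cond[target_labels] * source_probs).sum(axis=1)
    return float(np.log(eep).mean())
```

A higher LEEP score indicates the source representation should transfer better to the target task, which is what the snippet's performance/convergence claims refer to.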

Meta-Learning in Neural Networks: A Survey [article]

Timothy Hospedales, Antreas Antoniou, Paul Micaelli, Amos Storkey
2020 arXiv   pre-print
We first discuss definitions of meta-learning and position it with respect to related fields, such as transfer learning and hyperparameter optimization.  ...  Contrary to conventional approaches to AI where tasks are solved from scratch using a fixed learning algorithm, meta-learning aims to improve the learning algorithm itself, given the experience of multiple  ...  Addressing these issues through meta-generalizations of regularization, transfer learning, domain adaptation, and domain generalization are emerging directions [119] .  ... 
arXiv:2004.05439v2 fatcat:3r23tsxxkfbgzamow5miglkrye

Interventional Few-Shot Learning [article]

Zhongqi Yue, Hanwang Zhang, Qianru Sun, Xian-Sheng Hua
2020 arXiv   pre-print
It is worth noting that the contribution of IFSL is orthogonal to existing fine-tuning and meta-learning based FSL methods; hence, IFSL can improve all of them, achieving new 1-/5-shot state-of-the-art  ...  Thanks to it, we propose a novel FSL paradigm: Interventional Few-Shot Learning (IFSL).  ...  Broader Impact: The proposed method aims to improve the few-shot learning task.  ...  (The standard interventional formula this paradigm builds on is shown after this entry.)
arXiv:2009.13000v2 fatcat:atfbrjpz3zhmzj2aow7opnv3su
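
"Interventional" here is meant in the causal sense: instead of modeling P(Y | X), the method models the intervened distribution P(Y | do(X)). The standard tool for computing such an intervention is the backdoor adjustment, which stratifies over a confounder D; in IFSL the confounder is plausibly the pretrained knowledge, though the snippet does not spell this out, so take the formula below as general background rather than the paper's exact derivation.

```latex
% Backdoor adjustment over a discrete confounder D:
P\big(Y \mid \mathrm{do}(X)\big) \;=\; \sum_{d} P(Y \mid X, D = d)\, P(D = d)
```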

A Concise Review of Recent Few-shot Meta-learning Methods [article]

Xiaoxu Li, Zhuo Sun, Jing-Hao Xue, Zhanyu Ma
2020 arXiv   pre-print
We conclude this review with some vital current challenges and future prospects in few-shot meta-learning.  ...  Few-shot meta-learning has recently been reviving, with the expectation of mimicking humanity's fast adaptation to new concepts based on prior knowledge.  ...  Current few-shot meta-learning methods try to solve this problem by extracting transferable or shared knowledge, e.g., a global initialization of parameters, from an auxiliary dataset through meta-training  ...
arXiv:2005.10953v1 fatcat:v54jrpktazf3bfx4kqos4ls27y

GradMix: Multi-source Transfer across Domains and Tasks

Junnan Li, Ziwei Xu, Yongkang Wong, Qi Zhao, Mohan S. Kankanhalli
2020 2020 IEEE Winter Conference on Applications of Computer Vision (WACV)  
While previous works mostly focus on transfer learning from a single source, we study multi-source transfer across domains and tasks (MS-DTT) in a semi-supervised setting.  ...  GradMix follows a meta-learning objective, which assigns layer-wise weights to the source gradients such that the combined gradient follows the direction that minimizes the loss for a small set of samples  ...  [37] propose a meta-transfer learning method to address the few-shot learning task. Ren et al. [31] propose example reweighting in a meta-learning framework.  ...  (A sketch of the layer-wise gradient mixing follows this entry.)
doi:10.1109/wacv45572.2020.9093343 dblp:conf/wacv/LiXWZK20 fatcat:2jlnqybw2vbq5psqhb364nfcke
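
The snippet gives GradMix's key mechanism: gradients from several source domains/tasks are mixed with learnable layer-wise weights, and those weights are meta-learned so that a step along the mixed gradient reduces the loss on a small trusted sample set. A hedged PyTorch sketch of that mixing step; the dictionary-based interface, the softmax normalization, and `meta_loss_fn` are my assumptions, not the paper's code.

```python
import torch

def gradmix_step(source_grads, weights, params, lr, meta_loss_fn):
    """Mix per-source gradients with learnable layer-wise weights.

    source_grads: list over sources of {layer_name: gradient tensor}.
    weights:      {layer_name: tensor of shape (num_sources,)}, requires_grad.
    params:       {layer_name: parameter tensor}."""
    mixed = {}
    for name in params:
        g = torch.stack([sg[name] for sg in source_grads])  # (S, ...)
        w = torch.softmax(weights[name], dim=0)             # normalize per layer
        mixed[name] = (w.view(-1, *[1] * (g.dim() - 1)) * g).sum(0)
    # Meta-objective: take a step along the mixed gradient, then evaluate the
    # loss on a small validation set; backprop that loss into `weights`.
    updated = {n: params[n] - lr * mixed[n] for n in params}
    return updated, meta_loss_fn(updated)
```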

GradMix: Multi-source Transfer across Domains and Tasks [article]

Junnan Li, Ziwei Xu, Yongkang Wong, Qi Zhao, Mohan Kankanhalli
2020 arXiv   pre-print
While previous works mostly focus on transfer learning from a single source, we study multi-source transfer across domains and tasks (MS-DTT) in a semi-supervised setting.  ...  GradMix follows a meta-learning objective, which assigns layer-wise weights to the source gradients such that the combined gradient follows the direction that minimizes the loss for a small set of samples  ...  [37] propose a meta-transfer learning method to address the few-shot learning task. Ren et al. [31] propose example reweighting in a meta-learning framework.  ...
arXiv:2002.03264v1 fatcat:oycbi6ubsjd4beb4gd6huqeb5u

Meta-learning for Few-shot Natural Language Processing: A Survey [article]

Wenpeng Yin
2020 arXiv   pre-print
The goal of meta-learning is to train a model on a variety of tasks with rich annotations, such that it can solve a new task using only a few labeled samples.  ...  We try to provide clearer definitions, a progress summary, and some common datasets for applying meta-learning to few-shot NLP.  ...  Meta-learning vs. Transfer learning.  ...  (The episodic task setup this refers to is sketched after this entry.)
arXiv:2007.09604v1 fatcat:7w47wpup6fajzfeur63ybgqj6u
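
The "variety of tasks ... few labeled samples" setup the survey refers to is usually implemented as episodic N-way K-shot sampling: each episode draws N classes, a K-example support set for adaptation, and a query set for evaluation. A minimal, framework-agnostic sketch; the function name and defaults are illustrative.

```python
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, q_queries=15):
    """Sample one N-way K-shot episode.

    data_by_class: {class_id: [examples]} with at least
                   k_shot + q_queries examples per class."""
    classes = random.sample(list(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):  # relabel classes 0..n_way-1
        examples = random.sample(data_by_class[cls], k_shot + q_queries)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query
```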

Multi-Pair Text Style Transfer on Unbalanced Data [article]

Xing Han, Jessica Lundin
2021 arXiv   pre-print
In this work, we developed a task-adaptive meta-learning framework that can simultaneously perform multi-pair text-style transfer using a single model.  ...  The proposed method can adaptively balance differences in meta-knowledge across multiple tasks.  ...  To achieve this, we introduce meta-learning into the style-transfer problem. Meta-learning (Schmidhuber, 1987) is a method that enables a model to generalize over a distribution of tasks.  ...
arXiv:2106.10608v1 fatcat:gnoclrbgrnea5oss5odpaujige

A Unified Transferable Model for ML-Enhanced DBMS [article]

Ziniu Wu, Pei Yu, Peilun Yang, Rong Zhu, Yuxing Han, Yaliang Li, Defu Lian, Kai Zeng, Jingren Zhou
2021 arXiv   pre-print
...  meta knowledge across DBs.  ...  Recently, the database management system (DBMS) community has witnessed the power of machine learning (ML) solutions for DBMS tasks.  ...  This module learns the task-specific knowledge, which can also benefit various DBs through meta-learning.  ...
arXiv:2105.02418v3 fatcat:ljb66dxlkvhdtbwnam73e5mxm4
Showing results 1–15 of 50,122