189 Hits in 3.6 sec

Expert Training: Task Hardness Aware Meta-Learning for Few-Shot Classification [article]

Yucan Zhou, Yu Wang, Jianfei Cai, Yu Zhou, Qinghua Hu, Weiping Wang
2020 arXiv   pre-print
Deep neural networks are highly effective when a large number of labeled samples are available but fail with few-shot classification tasks.  ...  Recently, meta-learning methods have received much attention; they train a meta-learner on many additional tasks to gain the knowledge needed to instruct few-shot classification.  ...  Meta-learning, also referred to as learning to learn, is a popular solution for few-shot classification.  ...
arXiv:2007.06240v1 fatcat:34qfap2as5bupemk6zdksje3oe

Few-Shot Learning on Graphs: A Survey [article]

Chuxu Zhang, Kaize Ding, Jundong Li, Xiangliang Zhang, Yanfang Ye, Nitesh V. Chawla, Huan Liu
2022 arXiv   pre-print
However, prevailing (semi-)supervised graph representation learning models for specific tasks often suffer from the label sparsity issue, as data labeling is always time- and resource-consuming.  ...  In light of this, few-shot learning on graphs (FSLG), which combines the strengths of graph representation learning and few-shot learning, has been proposed to tackle the performance degradation  ...  For example, Meta-Graph [Bose et al., 2019] investigates few-shot link prediction on different networks (e.g., biological networks).  ...
arXiv:2203.09308v1 fatcat:7tpke435jnevdhdverovyug4sa

Low-resource Learning with Knowledge Graphs: A Comprehensive Survey [article]

Jiaoyan Chen and Yuxia Geng and Zhuo Chen and Jeff Z. Pan and Yuan He and Wen Zhang and Ian Horrocks and Huajun Chen
2021 arXiv   pre-print
In this survey, we comprehensively reviewed over 90 papers about KG-aware research for two major low-resource learning settings – zero-shot learning (ZSL), where new classes for prediction have never appeared in training, and few-shot learning (FSL), where new classes for prediction have only a small number of labeled samples available.  ...
arXiv:2112.10006v3 fatcat:wkz6gjx4r5gvlhh673p3rqsmgi

Meta-Learning with Context-Agnostic Initialisations [article]

Toby Perrett, Alessandro Masullo, Tilo Burghardt, Majid Mirmehdi, Dima Damen
2020 arXiv   pre-print
Meta-learning approaches have addressed few-shot problems by finding initialisations suited to fine-tuning on target tasks.  ...  First, we report on Omniglot few-shot character classification, using alphabets as context.  ...  After meta-learning, the primary network can be fine-tuned for a new few-shot target task that might not share context with the training set.  ...
arXiv:2007.14658v2 fatcat:tt2yhywrmjhfrf237e7ebkbvbq

ZSTAD: Zero-Shot Temporal Activity Detection

Lingling Zhang, Xiaojun Chang, Jun Liu, Minnan Luo, Sen Wang, Zongyuan Ge, Alexander Hauptmann
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
The proposed network is optimized with an innovative loss function that considers the embeddings of activity labels and their superclasses while learning the common semantics of seen and unseen activities.  ...  We design an end-to-end deep network based on R-C3D as the architecture for this solution.  ...  Zero-shot learning (ZSL) is designed to recognize samples of classes that are not seen during training [50, 45, 5, 17].  ...
doi:10.1109/cvpr42600.2020.00096 dblp:conf/cvpr/ZhangCLLWGH20 fatcat:ot7h44fa3bbkxnwp3kzfsut5le

Meta-Learning with Adaptive Hyperparameters [article]

Sungyong Baik, Myungsub Choi, Janghoon Choi, Heewon Kim, Kyoung Mu Lee
2020 arXiv   pre-print
The experimental results validate that Adaptive Learning of hyperparameters for Fast Adaptation (ALFA) is an equally important ingredient that has often been neglected in recent few-shot learning approaches.  ...  Instead of searching for a better task-aware initialization, we focus on a complementary factor in the MAML framework: inner-loop optimization (or fast adaptation).  ...
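As a rough illustration of the inner-loop adaptation that this excerpt refers to (a hedged toy sketch, not the authors' ALFA implementation: the quadratic task, the `inner_loop_adapt` helper, and the adaptive learning-rate rule are all invented here for illustration):

```python
import numpy as np

# Toy inner-loop adaptation: instead of a fixed scalar learning rate
# (as in vanilla MAML), a per-parameter learning rate is derived from
# the current gradient -- the kind of adaptive inner loop ALFA targets.
def inner_loop_adapt(theta, grad_fn, steps=5):
    for _ in range(steps):
        g = grad_fn(theta)
        # hypothetical adaptive rule: smaller steps where gradients are large
        lr = 0.5 / (1.0 + np.abs(g))
        theta = theta - lr * g
    return theta

# Toy task loss L(theta) = (theta - 3)^2, with gradient 2 * (theta - 3);
# a few adaptive inner-loop steps move theta toward the task optimum 3.
theta0 = np.array([0.0])
theta_adapted = inner_loop_adapt(theta0, lambda t: 2.0 * (t - 3.0))
```

In a full meta-learning setup, an outer loop would additionally meta-train whatever generates the per-step learning rates across many such tasks.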
arXiv:2011.00209v2 fatcat:xxyot3nwejgjnco22lcacefwfu

IIRC: Incremental Implicitly-Refined Classification [article]

Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar
2021 arXiv   pre-print
Moreover, this setup enables evaluating models for some important lifelong learning challenges that cannot be easily addressed under the existing setups.  ...  For example, distillation-based methods perform relatively well but are prone to incorrectly predicting too many labels per image.  ...
arXiv:2012.12477v2 fatcat:rkz5t7pcpbavzmaozvwiifwlh4

A Survey on Neural-symbolic Systems [article]

Dongran Yu, Bo Yang, Dayou Liu, Hui Wang
2021 arXiv   pre-print
In recent years, neural systems have demonstrated superior perceptual intelligence through highly effective learning, but their reasoning capabilities remain poor.  ...  In contrast, symbolic systems have exceptional cognitive intelligence through efficient reasoning, but their learning capabilities are poor.  ...  manual rule-making and realizing end-to-end knowledge graph reasoning. The main challenge for few-shot learning and zero-shot learning is the shortage of training  ...
arXiv:2111.08164v1 fatcat:bc33afiitnb73bmjtrfbdgkwpy

Addressing the Stability-Plasticity Dilemma via Knowledge-Aware Continual Learning [article]

Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
2022 arXiv   pre-print
Current methods in continual learning (CL) tend to focus on alleviating catastrophic forgetting of previous tasks.  ...  In this paper, we present the Knowledge-Aware coNtinual learner (KAN), which attempts to study the stability-plasticity dilemma to balance CL desiderata in class-IL.  ...  A few methods have recently been proposed for class-IL using a fixed-capacity model. SupSup (Wortsman et al., 2020) learns a mask for each task over a randomly initialized fixed network.  ...
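To make the fixed-capacity idea in this excerpt concrete (a minimal SupSup-style sketch under invented dimensions and random masks, not the cited implementation): each task trains only a binary mask over one shared, frozen random weight matrix, so earlier tasks' subnetworks are never overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

# One frozen, randomly initialized weight matrix shared by all tasks.
W = rng.standard_normal((4, 4))

# Each task owns only a binary mask over W; the backbone never changes,
# so solving a new task cannot catastrophically forget an old one.
task_masks = {
    "task_a": rng.integers(0, 2, size=W.shape),
    "task_b": rng.integers(0, 2, size=W.shape),
}

def forward(x, task):
    # Apply the task's subnetwork: masked entries of W are zeroed out.
    return (W * task_masks[task]) @ x

x = np.ones(4)
out_a = forward(x, "task_a")
out_b = forward(x, "task_b")
```

In the actual method the masks are learned per task rather than drawn at random, and at test time the appropriate mask is selected or inferred.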
arXiv:2110.05329v2 fatcat:m3p72kgfb5gnnpmvitwv5xolxy

Integrating Semantic Knowledge to Tackle Zero-shot Text Classification [article]

Jingqing Zhang, Piyawat Lertvittayakumjorn, Yike Guo
2019 arXiv   pre-print
Recognising text documents of classes that have never been seen in the learning stage – so-called zero-shot text classification – is therefore difficult, and only limited previous work has tackled this problem.  ...  Experimental results show that each of the two phases, and their combination, achieves the best overall accuracy compared with baselines and recent approaches in classifying real-world texts under the zero-shot  ...
arXiv:1903.12626v1 fatcat:xkb222r4hrbn5csx7hsdpsgipa

Integrating Semantic Knowledge to Tackle Zero-shot Text Classification

Jingqing Zhang, Piyawat Lertvittayakumjorn, Yike Guo
2019 Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)
Recognising text documents of classes that have never been seen in the learning stage – so-called zero-shot text classification – is therefore difficult, and only limited previous work has tackled this problem.  ...
doi:10.18653/v1/n19-1108 dblp:conf/naacl/ZhangLG19 fatcat:pz4oubvlabdnzbbaufwmpdiyr4

ZSTAD: Zero-Shot Temporal Activity Detection [article]

Lingling Zhang, Xiaojun Chang, Jun Liu, Minnan Luo, Sen Wang, Zongyuan Ge, Alexander Hauptmann
2020 arXiv   pre-print
We design an end-to-end deep network based on R-C3D as the architecture for this solution.  ...  Currently, the most effective methods of temporal activity detection are based on deep learning, and they typically perform very well with large-scale annotated videos for training.  ...  Zero-shot learning (ZSL) is designed to recognize samples of classes that are not seen during training [42, 38].  ...
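As background for the ZSL fragment above (a hedged toy sketch, not the paper's R-C3D-based model; the class names and attribute vectors are invented): zero-shot recognition typically scores an input's feature embedding against semantic embeddings of label names, so classes unseen during training can still be predicted.

```python
import numpy as np

# Invented semantic label embeddings (e.g., attribute or word vectors)
# for classes that were never seen during training.
label_embeddings = {
    "high_jump": np.array([1.0, 0.0, 0.5]),
    "long_jump": np.array([0.9, 0.2, 0.1]),
    "swimming":  np.array([0.0, 1.0, 0.3]),
}

def zero_shot_predict(feature):
    # Pick the unseen class whose label embedding is most similar
    # (by cosine similarity) to the sample's feature embedding.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(label_embeddings, key=lambda c: cos(feature, label_embeddings[c]))

pred = zero_shot_predict(np.array([0.1, 0.9, 0.2]))  # nearest: "swimming"
```

Real ZSL systems learn the feature extractor and the visual-semantic mapping; only the matching step is shown here.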
arXiv:2003.05583v1 fatcat:b6mujsehnramvmrw3qksykdifa

BREEDS: Benchmarks for Subpopulation Shift [article]

Shibani Santurkar, Dimitris Tsipras, Aleksander Madry
2020 arXiv   pre-print
We develop a methodology for assessing the robustness of models to subpopulation shift – specifically, their ability to generalize to novel data subpopulations that were not observed during training.  ...  We then validate that the corresponding shifts are tractable by obtaining human baselines for them.  ...
arXiv:2008.04859v1 fatcat:5pl5t2dscrexvbypfulb4uf7kq

A Survey on Visual Transfer Learning using Knowledge Graphs [article]

Sebastian Monka, Lavdim Halilaj, Achim Rettinger
2022 arXiv   pre-print
Transfer learning is the area of machine learning that tries to prevent these errors.  ...  Last, we summarize related surveys and give an outlook on challenges and open issues for future research.  ...  Moreover, the dataset is used for the task of few-shot learning.  ...
arXiv:2201.11794v1 fatcat:tapql5h4j5dvrnxjkaxek2cquu

From Big to Small: Adaptive Learning to Partial-Set Domains [article]

Zhangjie Cao, Kaichao You, Ziyang Zhang, Jianmin Wang, Mingsheng Long
2022 arXiv   pre-print
Then, we propose the Selective Adversarial Network (SAN and SAN++) with a bi-level selection strategy and an adversarial adaptation mechanism.  ...  Experiments on standard partial-set datasets and on more challenging tasks with superclasses show that SAN++ outperforms several domain adaptation methods.  ...  [28] to obtain state-of-the-art performance in both domain adaptation and zero-shot learning.  ...
arXiv:2203.07375v1 fatcat:rurl4ukmw5d7vmnptu6hngwj5a
Showing results 1 — 15 of 189 results