
FLEX: Unifying Evaluation for Few-Shot NLP [article]

Jonathan Bragg, Arman Cohan, Kyle Lo, Iz Beltagy
2021 arXiv   pre-print
Following the principles, we release the FLEX benchmark, which includes four few-shot transfer settings, zero-shot evaluation, and a public leaderboard that covers diverse NLP tasks. ... In addition, we present UniFew, a prompt-based model for few-shot learning that unifies pretraining and finetuning prompt formats, eschewing the complex machinery of recent prompt-based approaches in adapting ...
arXiv:2107.07170v2

Meta-Learning to Detect Rare Objects

Yu-Xiong Wang, Deva Ramanan, Martial Hebert
2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
We develop a conceptually simple but powerful meta-learning based framework that simultaneously tackles few-shot classification and few-shot localization in a unified, coherent way. ... While most existing work has focused on few-shot classification, we take a step towards few-shot object detection, a more challenging yet under-explored task. ... We thus simultaneously address few-shot classification and few-shot localization in a unified way, extending the sole classification in [70]. ...
doi:10.1109/iccv.2019.01002 dblp:conf/iccv/WangRH19

Self-Supervised Prototypical Transfer Learning for Few-Shot Classification [article]

Carlos Medina, Arnout Devos, Matthias Grossglauser
2020 arXiv   pre-print
Building on these insights and on advances in self-supervised learning, we propose a transfer learning approach which constructs a metric embedding that clusters unlabeled prototypical samples and their ... Recently, unsupervised meta-learning methods have exchanged the annotation requirement for a reduction in few-shot classification performance. ...
arXiv:2006.11325v1
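The metric-embedding idea above follows the prototypical-network family: classify a query by the nearest class centroid in embedding space. A minimal sketch of that nearest-prototype step (not the paper's full self-supervised pipeline; function names and the toy 2-D "embeddings" are illustrative):

```python
import numpy as np

def class_prototypes(support_emb, support_labels):
    """Mean embedding per class (its 'prototype')."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def prototype_classify(query_emb, classes, protos):
    """Label each query with the class of its nearest prototype
    (squared Euclidean distance in embedding space)."""
    dists = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy 2-D "embeddings": one cluster near the origin, one near (5, 5).
support = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(support, labels)
preds = prototype_classify(np.array([[0.05, 0.05], [4.9, 5.0]]), classes, protos)
```

In the self-supervised variant, the embedding would be trained so that augmentations of an unlabeled sample cluster around it before this classifier is applied.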

Few-Shot Human Motion Prediction via Meta-learning [chapter]

Liang-Yan Gui, Yu-Xiong Wang, Deva Ramanan, José M. F. Moura
2018 Lecture Notes in Computer Science  
This paper addresses the problem of few-shot human motion prediction, in the spirit of recent progress on few-shot learning and meta-learning. ... To accomplish this, we propose proactive and adaptive meta-learning (PAML) that introduces a novel combination of model-agnostic meta-learning and model regression networks and unifies them into an integrated ...
doi:10.1007/978-3-030-01237-3_27
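One half of PAML's combination is model-agnostic meta-learning: adapt an initialization on each task's support set, then update that initialization using query-set loss. A first-order (FOMAML) sketch on toy linear-regression tasks; this is a generic illustration, not the paper's architecture, and all names (`fomaml_step`, `linreg_task`, the learning rates) are assumptions:

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of mean-squared error for a linear model X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def fomaml_step(w, tasks, inner_lr=0.05, outer_lr=0.05):
    """One first-order MAML meta-update: adapt on each task's support
    set, then move the shared initialization using query-set gradients
    taken at the adapted weights."""
    meta_grad = np.zeros_like(w)
    for X_s, y_s, X_q, y_q in tasks:
        w_task = w - inner_lr * mse_grad(w, X_s, y_s)   # inner adaptation
        meta_grad += mse_grad(w_task, X_q, y_q)         # first-order outer grad
    return w - outer_lr * meta_grad / len(tasks)

# Meta-train on two toy regression tasks y = a * x.
rng = np.random.default_rng(0)
def linreg_task(a):
    X = rng.normal(size=(10, 1))
    y = a * X[:, 0]
    return X, y, X, y

tasks = [linreg_task(1.0), linreg_task(1.5)]
w = np.zeros(1)
for _ in range(100):
    w = fomaml_step(w, tasks)
```

The learned `w` ends up between the two task optima, i.e. an initialization that adapts quickly to either task.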

Few-Shot Learning with Localization in Realistic Settings [article]

Davis Wertheimer, Bharath Hariharan
2019 arXiv   pre-print
We show that prior methods designed for few-shot learning do not work out of the box in these challenging conditions, based on a new "meta-iNat" benchmark. ... Traditional recognition methods typically require large, artificially-balanced training classes, while few-shot learning methods are tested on artificially small ones. ...
arXiv:1904.08502v2

Learning Adaptive Classifiers Synthesis for Generalized Few-Shot Learning [article]

Han-Jia Ye, Hexiang Hu, De-Chuan Zhan
2021 arXiv   pre-print
Class-balanced many-shot learning and few-shot learning each tackle one side of this problem, by either learning strong classifiers for the head or learning to learn few-shot classifiers for the tail. ... In this paper, we investigate the problem of generalized few-shot learning (GFSL) -- during deployment, a model is required to learn about tail categories from few shots and simultaneously classify ... Our approach takes advantage of the neural dictionary to learn bases for composing many-shot and few-shot classifiers via a unified learning objective, which transfers knowledge from seen to unseen ...
arXiv:1906.02944v5

Meta-Transfer Learning through Hard Tasks [article]

Qianru Sun, Yaoyao Liu, Zhaozheng Chen, Tat-Seng Chua, Bernt Schiele
2019 arXiv   pre-print
In this paper, we propose a novel approach called meta-transfer learning (MTL), which learns to transfer the weights of a deep NN for few-shot learning tasks. ... We conduct few-shot learning experiments and report top performance for five-class few-shot recognition tasks on three challenging benchmarks: miniImageNet, tieredImageNet and Fewshot-CIFAR100 (FC100). ...
arXiv:1910.03648v1
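MTL's "transfer" is commonly realized as scaling-and-shifting: the pretrained weights stay frozen, and only lightweight per-unit scale and shift parameters are trained on the few-shot task. A minimal sketch of that operation on one linear layer (names like `scale_shift_forward` and `phi_*` are illustrative, not the paper's code):

```python
import numpy as np

def scale_shift_forward(x, W_frozen, b_frozen, phi_scale, phi_shift):
    """Frozen linear layer modulated by learned scaling/shifting
    parameters: only phi_scale and phi_shift would be trained on the
    few-shot task, while W_frozen and b_frozen stay fixed."""
    W = W_frozen * phi_scale   # per-output-unit scaling, broadcast over inputs
    b = b_frozen + phi_shift
    return x @ W.T + b

# With identity initialization (scale = 1, shift = 0) the layer
# reproduces the frozen pretrained computation exactly.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
x = rng.normal(size=(2, 4))
out = scale_shift_forward(x, W, b, np.ones((3, 1)), np.zeros(3))
```

Training only `phi_scale` and `phi_shift` keeps the adaptable parameter count tiny, which is what makes this viable with few shots.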

Few-shot learning for medical text: A systematic review [article]

Yao Ge, Yuting Guo, Yuan-Chi Yang, Mohammed Ali Al-Garadi, Abeed Sarker
2022 arXiv   pre-print
Objective: Few-shot learning (FSL) methods require small numbers of labeled instances for training. ... Twenty-one (68%) studies reconstructed existing datasets to create few-shot scenarios synthetically, and MIMIC-III was the most frequently used dataset (7/31; 23%). ... and the lack of unified benchmarks in few-shot NLP. 11 Attaining high machine learning performance has also been challenging in few-shot settings. ...
arXiv:2204.14081v1

Simple and Effective Few-Shot Named Entity Recognition with Structured Nearest Neighbor Learning [article]

Yi Yang, Arzoo Katiyar
2020 arXiv   pre-print
We present a simple few-shot named entity recognition (NER) system based on nearest neighbor learning and structured inference. ... Across several test domains, we show that a nearest neighbor classifier in this feature space is far more effective than standard meta-learning approaches. ... We compare STRUCTSHOT against existing methods on two few-shot NER scenarios: tag set extension and domain transfer. ...
arXiv:2010.02405v1
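The combination described above, nearest-neighbor emissions plus structured decoding, can be sketched as follows: each query token's score for a tag is its (negative) distance to the nearest support token with that tag, and a Viterbi pass enforces tag-transition structure. A simplified illustration of the idea, not the exact STRUCTSHOT system; `nn_emissions` and the toy 1-D embeddings are assumptions:

```python
import numpy as np

def nn_emissions(query_emb, support_emb, support_tags, num_tags):
    """Emission score of tag t for each query token = negative squared
    distance to the nearest support token labeled t."""
    scores = np.full((len(query_emb), num_tags), -1e9)
    d = ((query_emb[:, None, :] - support_emb[None, :, :]) ** 2).sum(-1)
    for t in range(num_tags):
        mask = support_tags == t
        if mask.any():
            scores[:, t] = -d[:, mask].min(axis=1)
    return scores

def viterbi(emissions, log_trans):
    """Best tag sequence under emission + transition scores."""
    T, K = emissions.shape
    dp = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for i in range(1, T):
        cand = dp[:, None] + log_trans + emissions[i][None, :]
        back[i] = cand.argmax(axis=0)
        dp = cand.max(axis=0)
    path = [int(dp.argmax())]
    for i in range(T - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    return path[::-1]

# Two query tokens, two support tokens (tag 0 near 0.0, tag 1 near 5.0),
# uniform transitions.
q = np.array([[0.1], [4.9]])
s = np.array([[0.0], [5.0]])
tags = np.array([0, 1])
em = nn_emissions(q, s, tags, 2)
pred = viterbi(em, np.zeros((2, 2)))
```

In the real system the transition scores are estimated from source-domain data, which is what makes the decoding "structured" rather than per-token.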

CLUES: Few-Shot Learning Evaluation in Natural Language Understanding [article]

Subhabrata Mukherjee, Xiaodong Liu, Guoqing Zheng, Saghar Hosseini, Hao Cheng, Greg Yang, Christopher Meek, Ahmed Hassan Awadallah, Jianfeng Gao
2021 arXiv   pre-print
Finally, we discuss several principles and choices in designing the experimental settings for evaluating the true few-shot learning performance and suggest a unified standardized approach to few-shot learning  ...  That has motivated a line of work that focuses on improving few-shot learning performance of NLU models.  ...  performance of and comparing different few-shot learning approaches [22] .  ... 
arXiv:2111.02570v1

Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start [article]

Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, Caiming Xiong
2020 arXiv   pre-print
We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and show its effectiveness as a unified solver for several downstream ... In this work, we introduce Universal Few-shot textual Entailment (UFO-Entail). ...
arXiv:2010.02584v1

Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [article]

Liuyu Xiang, Guiguang Ding, Jungong Han
2020 arXiv   pre-print
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.  ...  Specifically, the proposed framework involves two levels of adaptive learning schedules: Self-paced Expert Selection and Curriculum Instance Selection, so that the knowledge is adaptively transferred to  ...  However, different from few-shot learning algorithms, we mainly focus on learning a continuous spectrum of data distribution jointly, rather than focus solely on the few-shot classes.  ... 
arXiv:2001.01536v3
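The aggregation step, distilling multiple experts into one student, is typically a weighted knowledge-distillation loss over temperature-softened distributions. A generic multi-teacher KD sketch (the self-paced expert/instance schedules of LFME are omitted; `multi_teacher_kd` and its weighting are illustrative assumptions):

```python
import numpy as np

def softmax(z, temp=1.0):
    z = z / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_kd(student_logits, teacher_logits, weights, temp=2.0):
    """Weighted KL(teacher || student) over temperature-softened
    distributions, summed across teachers; the temp**2 factor keeps
    gradient magnitudes comparable across temperatures."""
    p_s = softmax(student_logits, temp)
    loss = 0.0
    for w, t_logits in zip(weights, teacher_logits):
        p_t = softmax(t_logits, temp)
        loss += w * (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return temp ** 2 * loss

# Sanity checks: zero loss when student matches the teacher,
# positive loss otherwise.
s_logits = np.array([[1.0, 0.0, -1.0]])
t_logits = np.array([[0.0, 1.0, 0.0]])
zero_loss = multi_teacher_kd(s_logits, [s_logits], [1.0])
pos_loss = multi_teacher_kd(s_logits, [t_logits], [1.0])
```

In a self-paced scheme the per-teacher `weights` would be adjusted during training according to how well the student already matches each expert.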

Disentangled Feature Representation for Few-shot Image Classification [article]

Hao Cheng, Yufei Wang, Haoliang Li, Alex C. Kot, Bihan Wen
2021 arXiv   pre-print
We conducted extensive experiments to evaluate the proposed DFR on general and fine-grained few-shot classification, as well as few-shot domain generalization, using the corresponding four benchmarks,  ...  Furthermore, we propose a novel FS-DomainNet dataset based on DomainNet, for benchmarking the few-shot domain generalization tasks.  ...  classification, fine-grained classification, and domain generalization) under the few-shot settings, evaluate the effectiveness of the proposed DFR framework.  ... 
arXiv:2109.12548v1

Binocular Mutual Learning for Improving Few-shot Classification [article]

Ziqi Zhou, Xi Qiu, Jiangtao Xie, Jianan Wu, Chi Zhang
2021 arXiv   pre-print
Most few-shot learning methods learn to transfer knowledge from datasets with abundant labeled data (i.e., the base set). ... meta-tasks within few classes in a local view. ... To further verify the performance of BML, we do experiments on another public few-shot classification benchmark: FC100. ...
arXiv:2108.12104v1

Meta Learning for Natural Language Processing: A Survey [article]

Hung-yi Lee, Shang-Wen Li, Ngoc Thang Vu
2022 arXiv   pre-print
This paper first introduces the general concepts of meta-learning and the common approaches. ... Meta-learning is an emerging field in machine learning studying approaches to learn better learning algorithms. ... Learn-to-init is an essential paradigm for few-shot learning and usually achieves outstanding results on the few-shot image classification benchmarks (Triantafillou et al., 2020 ...
arXiv:2205.01500v1