8,646 Hits in 4.2 sec

Diverse Few-Shot Text Classification with Multiple Metrics [article]

Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, Bowen Zhou
2018 arXiv   pre-print
seen few-shot task.  ...  We study few-shot learning in natural language domains.  ...  The proposed method can use multiple metrics, and it performs significantly better than previous single-metric methods when the few-shot tasks come from diverse domains.  ... 
arXiv:1805.07513v1 fatcat:74owgnvspzeuhiljvjuppucv6u

Few-shot Learning with Meta Metric Learners [article]

Yu Cheng, Mo Yu, Xiaoxiao Guo, Bowen Zhou
2019 arXiv   pre-print
Existing meta-learning or metric-learning based few-shot learning approaches are limited in handling diverse domains with various numbers of labels.  ...  We test our approach in the 'k-shot N-way' few-shot learning setting used in previous work and in a new, realistic few-shot setting with diverse multi-domain tasks and flexible label numbers.  ...  First, we will focus on selecting data from related domains/resources to support the training of meta metric learners.  ... 
arXiv:1901.09890v1 fatcat:ssekfocxqzdkle6ie7ltw3r34i

MGIMN: Multi-Grained Interactive Matching Network for Few-shot Text Classification [article]

Jianhai Zhang, Mieradilijiang Maimaiti, Xing Gao, Yuanhang Zheng, Ji Zhang
2022 arXiv   pre-print
Text classification struggles to generalize to unseen classes with very few labeled text instances per class.  ...  They also ignore the importance of capturing the inter-dependency between the query and the support set for few-shot text classification.  ...  Generalized FSL: In most studies of few-shot text classification (Bao et al., 2019), N-way K-shot accuracy is the standard evaluation metric.  ... 
arXiv:2204.04952v3 fatcat:g6zgletaxjgqdbt4g5gutxth24
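
The N-way K-shot accuracy mentioned in the snippet above is computed over sampled episodes: each episode draws N classes, K labeled support texts per class, and a batch of query texts to classify. A minimal, hypothetical sketch of that episodic evaluation (the `classify` callback and the class-indexed dataset are placeholders, not code from the paper):

```python
import random
from typing import Callable, Dict, List, Tuple

def n_way_k_shot_accuracy(
    data_by_class: Dict[str, List[str]],                    # class label -> list of texts
    classify: Callable[[List[Tuple[str, str]], str], str],  # (support pairs, query text) -> label
    n_way: int = 5,
    k_shot: int = 5,
    n_query: int = 15,
    episodes: int = 100,
) -> float:
    """Average query accuracy over randomly sampled N-way K-shot episodes."""
    correct, total = 0, 0
    for _ in range(episodes):
        classes = random.sample(list(data_by_class), n_way)
        support, queries = [], []
        for label in classes:
            texts = random.sample(data_by_class[label], k_shot + n_query)
            support += [(t, label) for t in texts[:k_shot]]
            queries += [(t, label) for t in texts[k_shot:]]
        for text, label in queries:
            correct += int(classify(support, text) == label)
            total += 1
    return correct / total
```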

Improving Zero and Few-shot Generalization in Dialogue through Instruction Tuning [article]

Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri, Maxine Eskenazi, Jeffrey P. Bigham
2022 arXiv   pre-print
We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.  ...  We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue  ...  The INSTRUCTDIAL repository consists of multiple dialogue tasks converted into a text-to-text format. In particular, we include dialogue generation, classification, and evaluation tasks.  ... 
arXiv:2205.12673v1 fatcat:5p6smvfnqfajfleiovs6onic6q
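
InstructDial's core idea is converting dialogue tasks into a unified text-to-text (instruction) format. The exact templates live in the paper's repository; the snippet below is only an illustrative guess at what such a conversion could look like for an intent-classification example (the instruction wording and field names are assumptions):

```python
def to_text_to_text(dialogue_context: str, options: list, answer: str) -> dict:
    """Wrap a dialogue classification example as an instruction-style input/target pair.

    The instruction wording below is illustrative, not the InstructDial template.
    """
    instruction = (
        "Instruction: Read the dialogue and choose the most appropriate label.\n"
        f"Dialogue: {dialogue_context}\n"
        f"Options: {', '.join(options)}\n"
        "Answer:"
    )
    return {"input": instruction, "target": answer}

example = to_text_to_text(
    "User: I'd like to book a table for two tonight.",
    ["restaurant_booking", "weather_query", "chitchat"],
    "restaurant_booking",
)
```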

Meta-learning for Few-shot Natural Language Processing: A Survey [article]

Wenpeng Yin
2020 arXiv   pre-print
Few-shot natural language processing (NLP) refers to NLP tasks that are accompanied by merely a handful of labeled examples. This is a real-world challenge that an AI system must learn to handle.  ...  Nevertheless, this paper focuses on the NLP domain, especially few-shot applications.  ...  Following the routine of metric-based meta-learning, prior work learned multiple metrics for few-shot text classification problems.  ... 
arXiv:2007.09604v1 fatcat:7w47wpup6fajzfeur63ybgqj6u
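
The survey snippet above refers to metric-based meta-learning in which several learned metrics are combined for an unseen few-shot text classification task (as in the multi-metric paper at the top of this page). A minimal prototypical-network-style sketch, where the per-metric encoders, prototypes, and mixture weights are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def multi_metric_predict(query_embs, prototypes_per_metric, metric_weights):
    """Combine class scores from several learned metric spaces.

    query_embs:            list of [D] query embeddings, one per metric/encoder
    prototypes_per_metric: list of [N, D] class prototypes (mean support embeddings)
    metric_weights:        [M] tensor of weights over metrics (e.g. predicted per task)
    """
    per_metric_scores = []
    for q, protos in zip(query_embs, prototypes_per_metric):
        # negative Euclidean distance to each prototype, prototypical-network style
        per_metric_scores.append(-torch.cdist(q.unsqueeze(0), protos).squeeze(0))  # [N]
    scores = torch.stack(per_metric_scores)                        # [M, N]
    combined = (metric_weights.unsqueeze(1) * scores).sum(dim=0)   # [N]
    return F.softmax(combined, dim=-1)                             # class probabilities
```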

CLUES: Few-Shot Learning Evaluation in Natural Language Understanding [article]

Subhabrata Mukherjee, Xiaodong Liu, Guoqing Zheng, Saghar Hosseini, Hao Cheng, Greg Yang, Christopher Meek, Ahmed Hassan Awadallah, Jianfeng Gao
2021 arXiv   pre-print
Finally, we discuss several principles and choices in designing the experimental settings for evaluating the true few-shot learning performance and suggest a unified standardized approach to few-shot learning  ...  That has motivated a line of work that focuses on improving the few-shot learning performance of NLU models.  ...  Evaluation Metric: We evaluate a model M in the few-shot setting with access to the task description along with a few labeled examples, k ∈ {10, 20, 30}.  ... 
arXiv:2111.02570v1 fatcat:xkapvzlmtnawdn2kb22yhvvije
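
The CLUES snippet describes evaluating a model with access to a task description plus k ∈ {10, 20, 30} labeled examples. A hedged sketch of that protocol, averaging over random draws of the k examples (the `finetune`, `predict`, and `score` calls are placeholder APIs, not the CLUES code):

```python
import random

def evaluate_few_shot(model, task, k_values=(10, 20, 30), seeds=(0, 1, 2)):
    """Fine-tune on k labeled examples per setting and report the mean test score."""
    results = {}
    for k in k_values:
        scores = []
        for seed in seeds:
            random.seed(seed)
            few_shot_train = random.sample(task.train_examples, k)
            # `finetune`, `predict`, and `score` are placeholder APIs for illustration
            finetuned = model.finetune(few_shot_train, task.description)
            scores.append(task.score(finetuned.predict(task.test_examples)))
        results[k] = sum(scores) / len(scores)
    return results
```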

Geometric Generalization Based Zero-Shot Learning Dataset Infinite World: Simple Yet Powerful [article]

Rajesh Chidambaram, Michael Kampffmeyer, Willie Neiswanger, Xiaodan Liang, Thomas Lachmann, Eric Xing
2018 arXiv   pre-print
We systematically analyze state-of-the-art models' internal consistency, identify their bottlenecks, and propose a pro-active optimization method for few-shot and zero-shot learning.  ...  In the process, we introduce Infinite World, an evaluable, scalable, multi-modal, light-weight dataset, and a Zero-Shot Intelligence metric, ZSI.  ...  Meta Networks uses a partially proactive as well as a reactive optimizer for few-shot image classification.  ... 
arXiv:1807.03711v2 fatcat:c4vjd4zlt5eyzjbliocahohtui

Active Few-Shot Learning with FASL [article]

Thomas Müller and Guillermo Pérez-Torró and Angelo Basile and Marc Franco-Salvador
2022 arXiv   pre-print
Recent advances in natural language processing (NLP) have led to strong text classification models for many tasks.  ...  This is relevant because, in a few-shot setup, we do not have access to a large validation set.  ...  Few-shot learning (FSL) is the problem of learning classifiers with only a few training examples.  ... 
arXiv:2204.09347v2 fatcat:5k32sysg3zbnxlljfhqjzbu3jm
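
FASL pairs few-shot learning with active learning, i.e. the model helps choose which unlabeled texts to annotate next. One standard selection strategy, shown below purely for illustration (the paper compares several strategies, and this may not match theirs), is to query the examples with the highest predictive entropy:

```python
import numpy as np

def select_for_annotation(predict_proba, unlabeled_texts, budget=10):
    """Pick the `budget` unlabeled texts the current model is least sure about."""
    probs = np.asarray([predict_proba(t) for t in unlabeled_texts])  # [n, num_classes]
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)           # predictive entropy
    most_uncertain = np.argsort(-entropy)[:budget]
    return [unlabeled_texts[i] for i in most_uncertain]
```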

CG-BERT: Conditional Text Generation with BERT for Generalized Few-shot Intent Detection [article]

Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, Philip Yu
2020 arXiv   pre-print
By modeling the utterance distribution with variational inference, CG-BERT can generate diverse utterances for the novel intents even with only a few utterances available.  ...  To approach this problem, we propose a novel model, Conditional Text Generation with BERT (CG-BERT).  ...  Recently, some few-shot learning studies have been presented with a special focus on few-shot text classification problems.  ... 
arXiv:2004.01881v1 fatcat:iofdy3sagbdsdmsh7rkzm32yhe

Few-Shot Charge Prediction with Data Augmentation and Feature Augmentation

Peipeng Wang, Xiuguo Zhang, Zhiying Cao
2021 Applied Sciences  
Therefore, we propose a model with data augmentation and feature augmentation for few-shot charge prediction.  ...  Specifically, the model takes the text description as the input and uses the Mixup method to generate virtual samples for data augmentation.  ...  [13] proposed a two-step framework that combines multiple kinds of semantic knowledge to solve zero-shot text classification. Geng et al.  ... 
doi:10.3390/app112210811 fatcat:j7h3udh2zjfy3ojhpchh7t2bbi
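
The Mixup step mentioned in the snippet interpolates pairs of training samples and their labels with a Beta-distributed coefficient to create virtual samples. A generic sketch operating on text embeddings and one-hot labels (applying it in feature space is an assumption; the paper's exact formulation may differ):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Create a virtual sample by convexly combining two (embedding, one-hot label) pairs."""
    lam = np.random.beta(alpha, alpha)          # mixing coefficient
    x_virtual = lam * x1 + (1.0 - lam) * x2     # interpolated feature vector
    y_virtual = lam * y1 + (1.0 - lam) * y2     # interpolated (soft) label
    return x_virtual, y_virtual
```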

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation [article]

Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari
2022 arXiv   pre-print
in both zero/few-shot settings.  ...  We can also sample diverse content representations from the content space without accessing data of the seen tasks, and fuse them with the representations of novel tasks for generating diverse texts in  ...  Experiments: We evaluate our approach on continual few/zero-shot text classification and low-resource text style transfer. Continual Zero/Few-shot Learning Setting.  ... 
arXiv:2202.13363v2 fatcat:e6tjmp4hcjbpxdl2qrsytvthjq

Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation [article]

Jixuan Wang, Kuan-Chieh Wang, Frank Rudzicz, Michael Brudno
2022 arXiv   pre-print
In this work, we propose a novel conditional neural process-based approach for few-shot text classification that learns to transfer from other diverse tasks with rich annotation.  ...  Experimental results show that our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta learning approaches on a collection of diverse few-shot tasks.  ...  In this work we develop a method to improve a large pre-trained LM for few-shot text classification problems by transferring from multiple source tasks.  ... 
arXiv:2201.11576v1 fatcat:jtkgzlzk2jfpdkvsvwtrubm4ve

Universal Natural Language Processing with Limited Annotations: Try Few-shot Textual Entailment as a Start [article]

Wenpeng Yin, Nazneen Fatema Rajani, Dragomir Radev, Richard Socher, Caiming Xiong
2020 arXiv   pre-print
In this work, we introduce Universal Few-shot textual Entailment (UFO-Entail).  ...  We demonstrate that this framework enables a pretrained entailment model to work well on new entailment domains in a few-shot setting, and show its effectiveness as a unified solver for several downstream  ...  In the language domain, prior work combines multiple metrics learned from diverse clusters of training tasks for an unseen few-shot text classification task.  ... 
arXiv:2010.02584v1 fatcat:uyhaox2yljaj5bxwnfubmbbxkq
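
UFO-Entail uses textual entailment as a unified format for downstream tasks. A common way to realize this, sketched below with illustrative hypotheses rather than the paper's exact templates, is to pair the input text with one hypothesis per candidate label and choose the label whose hypothesis receives the highest entailment probability:

```python
def classify_via_entailment(entail_prob, text, label_hypotheses):
    """Pick the label whose hypothesis the entailment model scores highest.

    entail_prob(premise, hypothesis) -> entailment probability (placeholder model call)
    label_hypotheses: e.g. {"sports": "This text is about sports.",
                            "politics": "This text is about politics."}
    """
    scores = {label: entail_prob(text, hyp) for label, hyp in label_hypotheses.items()}
    return max(scores, key=scores.get)
```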

Learning to Few-Shot Learn Across Diverse Natural Language Classification Tasks [article]

Trapit Bansal, Rishikesh Jha, Andrew McCallum
2020 arXiv   pre-print
Across 17 NLP tasks, including diverse domains of entity typing, natural language inference, sentiment analysis, and several other text classification tasks, we show that LEOPARD learns better initial  ...  We develop a novel method, LEOPARD, which enables optimization-based meta-learning across tasks with different numbers of classes, and evaluate different methods on generalization to diverse NLP classification  ...  inference, sentiment classification, and various other text classification tasks; (4) we study how meta-learning, multi-task learning, and fine-tuning perform for few-shot learning of completely new tasks  ... 
arXiv:1911.03863v3 fatcat:7bppnqaqirfy7a3b2lekqxzp7i

Few-shot learning for medical text: A systematic review [article]

Yao Ge, Yuting Guo, Yuan-Chi Yang, Mohammed Ali Al-Garadi, Abeed Sarker
2022 arXiv   pre-print
Objective: Few-shot learning (FSL) methods require small numbers of labeled instances for training.  ...  Concept extraction/named entity recognition was the most frequently addressed task (13/31; 42%), followed by text classification (10/31; 32%).  ...  About a third of the studies focused on few-shot text classification, with half of them involving multi-label text classification.  ... 
arXiv:2204.14081v1 fatcat:ageqcud25fh3xeuctrgeqytmhe
Showing results 1 — 15 out of 8,646 results