1,203 Hits in 3.8 sec

Zero-Shot Learning for Semantic Utterance Classification [article]

Yann N. Dauphin, Gokhan Tur, Dilek Hakkani-Tur, Larry Heck
2014 arXiv   pre-print
We propose a novel zero-shot learning method for semantic utterance classification (SUC).  ...  The framework uncovers the link between categories and utterances using a semantic space.  ...  Zero-Shot Learning for Semantic Utterance Classification In this section, we introduce a zero-shot learning framework for SUC where none of the classes are seen during training.  ... 
arXiv:1401.0509v3 fatcat:bcri3qreyfdqtm3npsmfat77m4
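The semantic-space matching this entry describes can be illustrated with a toy sketch. Everything below (the word vectors, the `embed`/`zero_shot_classify` helpers, the category names) is invented for illustration and is not taken from the paper, which learns its semantic space from large corpora:

```python
# Toy sketch of zero-shot utterance classification in a shared semantic
# space: embed both utterances and category names, then classify an
# utterance by its nearest category embedding.
import math

# Hypothetical 2-d word embeddings, hand-made for readability.
WORD_VECS = {
    "book":   (0.9, 0.1),
    "flight": (0.8, 0.2),
    "ticket": (0.85, 0.15),
    "play":   (0.1, 0.9),
    "music":  (0.05, 0.95),
    "song":   (0.1, 0.85),
}

def embed(text):
    """Average the vectors of known words in the text."""
    vecs = [WORD_VECS[w] for w in text.lower().split() if w in WORD_VECS]
    if not vecs:
        return (0.0, 0.0)
    n = len(vecs)
    return (sum(v[0] for v in vecs) / n, sum(v[1] for v in vecs) / n)

def cosine(a, b):
    dot = a[0] * b[0] + a[1] * b[1]
    na, nb = math.hypot(*a), math.hypot(*b)
    return dot / (na * nb) if na and nb else 0.0

def zero_shot_classify(utterance, category_names):
    """Pick the category whose name embedding is closest to the utterance."""
    u = embed(utterance)
    return max(category_names, key=lambda c: cosine(u, embed(c)))

print(zero_shot_classify("book a flight ticket", ["flight", "music"]))  # flight
```

Because categories are matched by embedding similarity rather than by a trained output layer, a category never seen during training can still be assigned, which is the core of the zero-shot setting.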

A Single Example Can Improve Zero-Shot Data Generation [article]

Pavel Burnyshev, Valentin Malykh, Andrey Bout, Ekaterina Artemova, Irina Piontkovskaya
2021 arXiv   pre-print
In the zero-shot approach, the model is trained to generate utterances from seen intents and is further used to generate utterances for intents unseen during training.  ...  Sub-tasks of intent classification, such as robustness to distribution shift, adaptation to specific user groups, personalization, and out-of-domain detection, require extensive and flexible datasets for  ...  We provide experimental evidence of a semantic shift when generating utterances for unseen classes using the zero-shot approach; 4.  ... 
arXiv:2108.06991v1 fatcat:v7aupyrpsjhodgseip3s3lqzci

Open Intent Discovery through Unsupervised Semantic Clustering and Dependency Parsing [article]

Pengfei Liu, Youzhang Ning, King Keung Wu, Kun Li, Helen Meng
2021 arXiv   pre-print
In the first stage, we aim to generate a set of semantically coherent clusters where the utterances within each cluster convey the same intent.  ...  We obtain the utterance representation from various pre-trained sentence embeddings and present a metric of balanced score to determine the optimal number of clusters in K-means clustering for balanced  ...  [31] proposed an intent expansion framework using a convolutional deep structured semantic model to generate embeddings for both seen and unseen intents, and thus achieved zero-shot intent classification  ... 
arXiv:2104.12114v2 fatcat:fsvp572yhrej5cfknslxrcbmqi
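The snippet mentions a "balanced score" for choosing the number of K-means clusters. The paper's exact metric is not reproduced here; the sketch below shows one simple balance-aware criterion (ratio of smallest to largest cluster size) as a stand-in, with `balance_score` and `pick_k` being hypothetical helpers:

```python
# Hypothetical sketch of balance-aware selection of the cluster count k:
# among several K-means runs, prefer the k whose clusters are most
# evenly sized. This is an illustration, not the paper's actual metric.
from collections import Counter

def balance_score(labels):
    """Ratio of smallest to largest cluster size (1.0 = perfectly balanced)."""
    sizes = Counter(labels).values()
    return min(sizes) / max(sizes)

def pick_k(clusterings):
    """Given {k: cluster labels} from K-means runs, return the most balanced k."""
    return max(clusterings, key=lambda k: balance_score(clusterings[k]))

runs = {2: [0, 0, 0, 0, 0, 1], 3: [0, 0, 1, 1, 2, 2]}
print(pick_k(runs))  # 3
```

In practice such a balance term would be combined with a cohesion measure (e.g. inertia or silhouette), since balance alone ignores how well the clusters fit the data.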

Coach: A Coarse-to-Fine Approach for Cross-domain Slot Filling [article]

Zihan Liu, Genta Indra Winata, Peng Xu, Pascale Fung
2020 arXiv   pre-print
Our model first learns the general pattern of slot entities by detecting whether the tokens are slot entities or not. It then predicts the specific types for the slot entities.  ...  In addition, we propose a template regularization approach to improve the adaptation robustness by regularizing the representation of utterances based on utterance templates.  ...  Acknowledgments This work is partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government.  ... 
arXiv:2004.11727v1 fatcat:k2xrxo4hsjcadj7kgqvvw7qv6q

A Comprehensive Understanding of Code-mixed Language Semantics using Hierarchical Transformer [article]

Ayan Sengupta, Tharun Suresh, Md Shad Akhtar, Tanmoy Chakraborty
2022 arXiv   pre-print
We further demonstrate the generalizability of the HIT architecture using masked language modeling-based pre-training, zero-shot learning, and transfer learning approaches.  ...  Learning the semantics and morphology of code-mixed language remains a key challenge, due to the scarcity of data and the unavailability of robust, language-invariant representation learning techniques.  ...  Zero-Shot Learning: We also analyze the effectiveness of HIT in zero-shot learning setup.  ... 
arXiv:2204.12753v1 fatcat:vyfayhe5f5gwnklql7mtbokbw4

Automatic Discovery of Novel Intents &amp; Domains from Text Utterances [article]

Nikhita Vedula, Rahul Gupta, Aman Alok, Mukund Sridhar
2020 arXiv   pre-print
It learns discriminative deep features to group together utterances and discover multiple latent intent categories within them in an unsupervised manner.  ...  ADVIN significantly outperforms baselines on three benchmark datasets, and real user utterances from a commercial voice-powered agent.  ...  We next develop a network that learns discriminative deep features by maximizing inter-intent variance and minimizing intra-intent variance between utterance pairs.  ... 
arXiv:2006.01208v1 fatcat:muaiqdi425hw5bjiibv5u7ddkq
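The inter-/intra-intent variance objective in this snippet can be sketched as a contrastive pairwise loss: pull same-intent utterance embeddings together and push different-intent pairs apart. The embeddings, margin value, and `pairwise_objective` helper below are illustrative assumptions, not the paper's implementation:

```python
# Illustrative contrastive objective over utterance pairs: same-intent
# pairs contribute their squared distance (to be minimized), while
# different-intent pairs closer than a margin are penalized.
def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pairwise_objective(embs, intents, margin=1.0):
    """Sum intra-intent distances; hinge-penalize close inter-intent pairs."""
    loss = 0.0
    n = len(embs)
    for i in range(n):
        for j in range(i + 1, n):
            d = sq_dist(embs[i], embs[j])
            if intents[i] == intents[j]:
                loss += d                     # pull same-intent pairs together
            else:
                loss += max(0.0, margin - d)  # push different intents apart
    return loss
```

Minimizing this loss yields exactly the property the snippet names: low intra-intent variance and high inter-intent variance, which makes subsequent unsupervised grouping of utterances easier.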

Zero-Shot Personalized Speech Enhancement through Speaker-Informed Model Selection [article]

Aswin Sivaraman, Minje Kim
2021 arXiv   pre-print
This paper presents a novel zero-shot learning approach towards personalized speech enhancement through the use of a sparsely active ensemble model.  ...  Grouping the training set speakers into non-overlapping semantically similar groups is non-trivial and ill-defined.  ...  Our method is zero-shot as the  ... 
arXiv:2105.03542v1 fatcat:ntsjw3ty2bat7ninwddh5fjjru

Grounded Adaptation for Zero-shot Executable Semantic Parsing [article]

Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer
2021 arXiv   pre-print
We propose Grounded Adaptation for Zero-shot Executable Semantic Parsing (GAZP) to adapt an existing semantic parser to new environments (e.g. new database schemas).  ...  On the Spider, Sparc, and CoSQL zero-shot semantic parsing tasks, GAZP improves logical form and execution accuracy of the baseline parser.  ...  Sparc, and CoSQL zero-shot semantic parsing backward utterance generator.  ... 
arXiv:2009.07396v3 fatcat:hi23ffdeavam5hyciex6c7oidm

Learning Class-Transductive Intent Representations for Zero-shot Intent Detection [article]

Qingyi Si, Yuanxin Liu, Peng Fu, Zheng Lin, Jiangnan Li, Weiping Wang
2021 arXiv   pre-print
Zero-shot intent detection (ZSID) aims to deal with the continuously emerging intents without annotated training data.  ...  On this basis, we introduce a multi-task learning objective, which encourages the model to learn the distinctions among intents, and a similarity scorer, which estimates the connections among intents more  ...  Class-transductive Zero-shot Learning Classtransductive zero-shot learning utilizes semantic information (typically a textual description) about the unseen classes in the training stage.  ... 
arXiv:2012.01721v2 fatcat:n7zkbx72xnbbbdiceunyg3v52y

XeroAlign: Zero-Shot Cross-lingual Transformer Alignment [article]

Milan Gritta, Ignacio Iacobacci
2021 arXiv   pre-print
Zero-shot methods in particular, often use translated task data as a training signal to bridge the performance gap between the source and target language(s).  ...  XLM-RA's text classification accuracy exceeds that of XLM-R trained with labelled data and performs on par with state-of-the-art models on a cross-lingual adversarial paraphrasing task.  ...  Zero-shot paraphrase detection is another instance of text classification.  ... 
arXiv:2105.02472v2 fatcat:wnttyap5wbblncujqf3nquoeqi
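This snippet describes using translated task data as a training signal to align source- and target-language representations. A minimal sketch of such an alignment loss, assuming a simple mean-squared-error objective over sentence embeddings (here plain Python lists; a real implementation would use transformer sentence vectors):

```python
# Minimal sketch of a cross-lingual alignment loss: pull the sentence
# embedding of a translation toward the embedding of its source-language
# sentence, so the encoder becomes language-invariant for the task.
def alignment_loss(src_emb, tgt_emb):
    """Mean squared error between source- and target-language embeddings."""
    assert len(src_emb) == len(tgt_emb)
    return sum((s - t) ** 2 for s, t in zip(src_emb, tgt_emb)) / len(src_emb)

print(alignment_loss([1.0, 0.0], [0.0, 1.0]))  # 1.0
print(alignment_loss([0.5, 0.5], [0.5, 0.5]))  # 0.0
```

Added to the usual task loss on source-language data, driving this term to zero encourages the model's zero-shot predictions on the target language to match its source-language behavior.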

Linguistically-Enriched and Context-Aware Zero-shot Slot Filling [article]

A.B. Siddique, Fuad Jamour, Vagelis Hristidis
2021 arXiv   pre-print
We propose a new zero-shot slot filling neural model, LEONA, which works in three steps.  ...  This setting is commonly referred to as zero-shot slot filling. Little work has focused on this setting, with limited experimental evaluation.  ...  Meta-learning based methods [10, 38, 39] have shown tremendous success for few-shot learning in many tasks such as few-shot image generation [46] , image classification [53] , and domain adaptation  ... 
arXiv:2101.06514v1 fatcat:6xvgetmcwnburorej27h3kmv5q

Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization [article]

Haode Zhang, Haowen Liang, Yuwei Zhang, Liming Zhan, Xiao-Ming Wu, Xiaolei Lu, Albert Y.S. Lam
2022 arXiv   pre-print
We propose two regularizers based on contrastive learning and correlation matrix respectively, and demonstrate their effectiveness through extensive experiments.  ...  It is challenging to train a good intent classifier for a task-oriented dialogue system with only a few annotations.  ...  Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. This research was supported by the grants of HK ITF UIM/377 and PolyU DaSAIL project P0030935 funded by RGC.  ... 
arXiv:2205.07208v1 fatcat:qvwaqkqmyfczlmsui4idanmggm

Zero-Shot Cross-lingual Semantic Parsing [article]

Tom Sherborne, Mirella Lapata
2022 arXiv   pre-print
We remove these assumptions and study cross-lingual semantic parsing as a zero-shot problem, without parallel data (i.e., utterance-logical form pairs) for new languages.  ...  Recent work in cross-lingual semantic parsing has successfully applied machine translation to localize parsers to new languages.  ...  Experimental Setup Semantic Parsing Datasets Our experiments examine whether our zero-shot approach generalizes across languages and domains.  ... 
arXiv:2104.07554v2 fatcat:oxc7i2scxvd7jmbsew34rr5g3i

PartGlot: Learning Shape Part Segmentation from Language Reference Games [article]

Juil Koo, Ian Huang, Panos Achlioptas, Leonidas Guibas, Minhyuk Sung
2022 arXiv   pre-print
We introduce PartGlot, a neural framework and associated architectures for learning semantic part segmentation of 3D shape geometry, based solely on part referential language.  ...  and the listener has to find the target based on this utterance.  ...  Bold indicates the highest mIoU except for the few-shot learning results.  ... 
arXiv:2112.06390v2 fatcat:wvpdbuengjbhphnqfyvzotfvcu

MZET: Memory Augmented Zero-Shot Fine-grained Named Entity Typing [article]

Tao Zhang, Congying Xia, Chun-Ta Lu, Philip Yu
2020 arXiv   pre-print
Named entity typing (NET) is a classification task of assigning an entity mention in the context with given semantic types.  ...  Finally, through the memory component which models the relationship between the entity mention and the entity type, MZET transfers the knowledge from seen entity types to the zero-shot ones.  ...  Acknowledgments We thank the reviewers for their valuable comments. This work is supported in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941.  ... 
arXiv:2004.01267v2 fatcat:3vyc56lierf3pjfrf3bh5ztnea