Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference
[article]
2020
arXiv
pre-print
We propose to boost the discriminative ability by transferring a natural language inference (NLI) model. ...
More notably, the NLI transfer enables our 10-shot model to perform competitively with 50-shot or even full-shot classifiers, while we can keep the inference time constant by leveraging a faster embedding ...
Acknowledgments This work is supported in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941. ...
arXiv:2010.13009v1
fatcat:s3uuyunrkreapnoyjn2pyfyuui
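The discriminative nearest-neighbor idea in this entry scores (query, example) pairs rather than classifying the query alone, then assigns the label of the best-matching support example. A minimal sketch, where the toy token-overlap `pair_score` is an illustrative stand-in for the transferred NLI model's entailment score (all names below are assumptions, not the authors' code):

```python
def pair_score(query: str, example: str) -> float:
    """Toy stand-in for an NLI model's entailment score on a (query, example) pair."""
    q, e = set(query.lower().split()), set(example.lower().split())
    return len(q & e) / max(len(q | e), 1)

def nn_intent(query: str, support: list[tuple[str, str]]) -> str:
    """Return the intent label of the highest-scoring support example."""
    _, best_label = max(support, key=lambda ex: pair_score(query, ex[0]))
    return best_label

support = [
    ("what is my account balance", "balance"),
    ("transfer money to my savings", "transfer"),
]
print(nn_intent("show my balance please", support))  # -> balance
```

Swapping `pair_score` for a fine-tuned NLI entailment model recovers the discriminative pairing; per the abstract, a faster embedding model can keep inference time constant despite the pairwise scoring.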
Discriminative Nearest Neighbor Few-Shot Intent Detection by Transferring Natural Language Inference
2020
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
unpublished
We propose to boost the discriminative ability by transferring a natural language inference (NLI) model. ...
More notably, the NLI transfer enables our 10-shot model to perform competitively with 50-shot or even full-shot classifiers, while we can keep the inference time constant by leveraging a faster embedding ...
Acknowledgments This work is supported in part by NSF under grants III-1763325, III-1909323, and SaTC-1930941. ...
doi:10.18653/v1/2020.emnlp-main.411
fatcat:ijzws2bzqbb43if6jb56n2q3q4
Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning
[article]
2021
arXiv
pre-print
In this work, we focus on a more challenging few-shot intent detection scenario where many intents are fine-grained and semantically similar. ...
We present a simple yet effective few-shot intent detection schema via contrastive pre-training and fine-tuning. ...
Acknowledgements This work is supported in part by NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. We thank the anonymous reviewers for their helpful and thoughtful comments. ...
arXiv:2109.06349v1
fatcat:pndv226l3zbxlfgt4bkgugehfa
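The contrastive pre-training and fine-tuning schema in this entry pulls together utterances sharing an intent and pushes apart the rest of the batch. A minimal numpy sketch of a supervised contrastive loss of this shape, assuming L2-normalized embeddings (an illustration of the objective, not the paper's exact recipe):

```python
import numpy as np

def sup_contrastive_loss(emb: np.ndarray, labels: np.ndarray, tau: float = 0.1) -> float:
    """Supervised contrastive loss over one batch of normalized embeddings."""
    sim = emb @ emb.T / tau                      # temperature-scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)               # an instance is not its own pair
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    loss, n = 0.0, len(labels)
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if pos:                                  # average log-likelihood of same-label positives
            loss -= log_prob[i, pos].mean()
    return loss / n

# Two tight intent clusters yield a small loss; label-mixed clusters a large one.
emb = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
print(sup_contrastive_loss(emb, np.array([0, 0, 1, 1])))
```

The loss shrinks as same-intent utterances cluster, which is exactly the geometry that helps separate semantically similar fine-grained intents.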
CG-BERT: Conditional Text Generation with BERT for Generalized Few-shot Intent Detection
[article]
2020
arXiv
pre-print
In this paper, we formulate a more realistic and difficult problem setup for the intent detection task in natural language understanding, namely Generalized Few-Shot Intent Detection (GFSID). ...
By modeling the utterance distribution with variational inference, CG-BERT can generate diverse utterances for the novel intents even with only a few utterances available. ...
SMOTE only generates new features within the few shots, while CG-BERT is able to generate diverse examples beyond these five shots by transferring expressions from existing intents. ...
arXiv:2004.01881v1
fatcat:iofdy3sagbdsdmsh7rkzm32yhe
Virtual Augmentation Supported Contrastive Learning of Sentence Representations
[article]
2022
arXiv
pre-print
This challenge is magnified in natural language processing where no general rules exist for data augmentation due to the discrete nature of natural language. ...
Leveraging the large training batch size of contrastive learning, we approximate the neighborhood of an instance via its K-nearest in-batch neighbors in the representation space. ...
Among them, supervised learning on the Natural Language Inference (NLI) datasets (Bowman et al., 2015a; Williams et al., 2017; Wang et al., 2018) has established benchmark transfer learning performance ...
arXiv:2110.08552v2
fatcat:t374n34vsjhh5fqim6q6xodq2e
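The in-batch approximation described in this entry can be sketched directly: within one large contrastive batch, each instance's neighborhood is taken to be its K most similar batchmates in representation space, and those neighbors serve as virtual augmentations. An illustrative sketch, not the authors' code:

```python
import numpy as np

def in_batch_knn(reps: np.ndarray, k: int = 2) -> np.ndarray:
    """Indices of each row's k nearest in-batch neighbors by cosine similarity."""
    normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)             # an instance is not its own neighbor
    return np.argsort(-sim, axis=1)[:, :k]     # top-k most similar per row

# Each instance's nearest batchmate approximates its neighborhood.
reps = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
print(in_batch_knn(reps, k=1))
```

Because the batch is large, this avoids any external index: the neighborhood is recomputed from the current batch's representations at each step.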
Knowledge Extraction in Low-Resource Scenarios: Survey and Perspective
[article]
2022
arXiv
pre-print
[He et al., 2021] regard nearest neighbors as an augmentation of the language model's predictions, using neighbors of the predictions as targets for language model learning, which demonstrated ...
Meta Learning promptly assimilates new knowledge and deduces new classes by learning from few instances; with its ability of "learning to learn", it is naturally suitable for few-shot KE tasks. ...
arXiv:2202.08063v1
fatcat:2q64tx2mzne53gt24adi6ymj7a
New Intent Discovery with Pre-training and Contrastive Learning
[article]
2022
arXiv
pre-print
Extensive experiments on three intent recognition benchmarks demonstrate the high effectiveness of our proposed method, which outperforms state-of-the-art methods by a large margin in both unsupervised ...
New intent discovery aims to uncover novel intent categories from user utterances to expand the set of supported intent classes. ...
of few-shot intent detection. ...
arXiv:2205.12914v1
fatcat:7tm5vjfdn5ajjbxa66v42dkyii
VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding
[article]
2021
arXiv
pre-print
VideoCLIP trains a transformer for video and text by contrasting temporally overlapping positive video-text pairs with hard negatives from nearest neighbor retrieval. ...
We present VideoCLIP, a contrastive approach to pre-train a unified model for zero-shot video and text understanding, without using any labels on downstream tasks. ...
This suggests that using a joint backbone for video and text is effective. "retrieve k" indicates directly searching for the k nearest neighbors instead of sampling k videos from the 2k nearest neighbors (used by VideoCLIP ...
arXiv:2109.14084v2
fatcat:bbv6j5ekcfhg3c5ladvx5ytdae
Improving Semantic Embedding Consistency by Metric Learning for Zero-Shot Classification
[chapter]
2016
Lecture Notes in Computer Science
The key contribution of the proposed approach is to control the semantic embedding of images, one of the main ingredients of zero-shot learning, by formulating it as a metric learning problem. ...
The optimized empirical criterion associates two types of sub-task constraints: metric discriminating capacity and accurate attribute prediction. ...
[33] exploit natural language processing technologies to generate event descriptions. ...
doi:10.1007/978-3-319-46454-1_44
fatcat:jvxlw24xazgtll745xgoaw6m7q
Improving Semantic Embedding Consistency by Metric Learning for Zero-Shot Classification
[article]
2016
arXiv
pre-print
The key contribution of the proposed approach is to control the semantic embedding of images, one of the main ingredients of zero-shot learning, by formulating it as a metric learning problem. ...
The optimized empirical criterion associates two types of sub-task constraints: metric discriminating capacity and accurate attribute prediction. ...
[33] exploit natural language processing technologies to generate event descriptions. ...
arXiv:1607.08085v1
fatcat:emy4eektarakhebwltkij37gie
SECaps: A Sequence Enhanced Capsule Model for Charge Prediction
[article]
2018
arXiv
pre-print
Nevertheless, most existing works on automatic charge prediction perform adequately on those high-frequency charges but are not yet capable of predicting few-shot charges with limited cases. ...
In addition, we construct our SECaps model by making use of the seq-caps layer. ...
However, this work cannot handle the few-shot problem. Hu et al. [9] propose an attention-based neural model incorporating several discriminative legal attributes. ...
arXiv:1810.04465v1
fatcat:pbtwqaqh5fclxkamsgqdzcqanq
Fast and Light-Weight Answer Text Retrieval in Dialogue Systems
[article]
2022
arXiv
pre-print
During inference time, only the query needs to be encoded; ANN (approximate nearest neighbor) search libraries such as FAISS (Johnson et al., 2017) are used to efficiently search for the most relevant ...
once; also they leverage ANN (approximate nearest neighbor) algorithms to efficiently search for relevant dense vectors. ...
Our work explores efficient and effective approaches to text retrieval over an answer-text corpus curated by chatbot administrators. ...
arXiv:2205.14226v2
fatcat:y4tphpneabcjjeimvdenkvj5ky
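The retrieval setup in this entry encodes the answer corpus offline, encodes only the query at inference time, and searches for the most relevant dense vectors. A brute-force inner-product sketch is shown below; at corpus scale an ANN library such as FAISS would replace the search step (the names here are illustrative assumptions):

```python
import numpy as np

def search(query_vec: np.ndarray, answer_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k most relevant answers by inner product."""
    scores = answer_vecs @ query_vec       # one relevance score per stored answer
    return np.argsort(-scores)[:k]         # highest-scoring answers first

# Answer vectors are computed once, offline; only the query is encoded per request.
answers = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
query = np.array([0.1, 0.99])
print(search(query, answers, k=2))
```

Since the answer encodings never change between requests, the per-query cost is one encoder pass plus a vector search, which is what makes the approach fast and light-weight.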
Open-world Machine Learning: Applications, Challenges, and Opportunities
[article]
2022
arXiv
pre-print
TOP-ID can detect a user's intent automatically in natural language. It does not need any prior knowledge for intent detection. ...
The loss layer detects the known intents from discriminative deep features while LOF detects unknown intents. ...
DBpedia [81]: https://wiki.dbpedia.org/datasets; EMNIST [77]: https://www.nist.gov/itl/products-and-services/emnist-dataset; Auslan [65]: https://archive.ics.uci.edu/ml/datasets/Australian+Sign+Language ...
arXiv:2105.13448v2
fatcat:rv6f42sdvvajnhub4uguuhb2cy
CLEAR: Cumulative LEARning for One-Shot One-Class Image Recognition
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Our method exploits transfer learning to model the transformation from a representation of the input, extracted by a Convolutional Neural Network, to a classification decision boundary. ...
This work addresses the novel problem of one-shot oneclass classification. The goal is to estimate a classification decision boundary for a novel class based on a single image example. ...
Tax [35] provided a novel method, called Nearest Neighbor Description (NN-d), for using a Nearest Neighbor classifier to deal with the OCC problem. ...
doi:10.1109/cvpr.2018.00363
dblp:conf/cvpr/KozerawskiT18
fatcat:u62jgmqci5gm7mpg2z7wluxo74
A Review on Text-Based Emotion Detection – Techniques, Applications, Datasets, and Future Directions
[article]
2022
arXiv
pre-print
The field of text-based emotion detection (TBED) is advancing to provide automated solutions for various applications, in domains such as business and finance, to name a few. ...
It produces better predictions compared to a single decision tree. k-NN (k-Nearest Neighbor) is one of the simplest classifiers: an instance is labeled according to its k nearest neighbors. ...
the time of the testing when there are a few labeled instances, it is referred to as few-shot learning. ...
arXiv:2205.03235v1
fatcat:b3m25fg6xfc3leeym22eqysq5a
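The k-NN classifier described in this entry can be stated concretely: an instance receives the majority label among its k nearest labeled neighbors. A small sketch with toy, hypothetical emotion data (the features and labels are invented for illustration):

```python
import numpy as np
from collections import Counter

def knn_predict(x: np.ndarray, X: np.ndarray, y: list, k: int = 3):
    """Majority vote among the k training points nearest to x (Euclidean)."""
    dists = np.linalg.norm(X - x, axis=1)      # distance from x to every labeled point
    nearest = np.argsort(dists)[:k]            # indices of the k closest points
    return Counter(y[i] for i in nearest).most_common(1)[0][0]

# Toy, hypothetical 2-D "emotion" features.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
y = ["sad", "sad", "joy", "joy"]
print(knn_predict(np.array([5.5, 5.0]), X, y))  # -> joy
```

With no training phase, k-NN defers all work to query time, which is why it pairs naturally with the few-labeled-instance settings the survey discusses.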
Showing results 1 — 15 out of 820 results