
Second-order contexts from lexical substitutes for few-shot learning of word representations

Qianchu Liu, Diana McCarthy, Anna Korhonen
2019 Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)
In this paper, we focus on few-shot learning of emerging concepts that fully exploits only a few available contexts.  ...  Previous context-based approaches to modelling unseen words only consider bag-of-words first-order contexts, whereas our method aggregates contexts as second-order substitutes that are produced by a sequence-aware  ...  Acknowledgments We acknowledge Peterhouse College at the University of Cambridge for funding Qianchu Liu's PhD research.  ... 
doi:10.18653/v1/s19-1007 dblp:conf/starsem/LiuMK19 fatcat:3sdyzbzzgff3fgujoh7la2tg44
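The mechanism the snippet describes can be sketched in a few lines: a masked language model stands in for the paper's sequence-aware substitute generator, and the few available contexts of an emerging word are aggregated into one bag of weighted substitutes (the second-order context). The model choice, function names, and example sentences below are illustrative assumptions, not the authors' code.

```python
from collections import Counter

from transformers import pipeline  # any fill-mask model works; BERT is an assumption

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def substitute_vector(contexts, top_k=20):
    """Aggregate the substitute distributions of several contexts into
    one bag-of-substitutes vector (the 'second-order context')."""
    vec = Counter()
    for ctx in contexts:
        # each context must contain the target slot marked as [MASK]
        for pred in fill_mask(ctx, top_k=top_k):
            vec[pred["token_str"]] += pred["score"]
    return vec

def cosine(u, v):
    shared = set(u) & set(v)
    num = sum(u[w] * v[w] for w in shared)
    den = (sum(x * x for x in u.values()) ** 0.5) * (sum(x * x for x in v.values()) ** 0.5)
    return num / den if den else 0.0

# Few-shot setting: only a handful of contexts for the emerging word.
novel = substitute_vector(["She sipped her [MASK] slowly.",
                           "He ordered a [MASK] with milk."])
known = substitute_vector(["I drink [MASK] every morning."])
print(cosine(novel, known))
```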

Few-shot Text Classification with Distributional Signatures [article]

Yujia Bao, Menghua Wu, Shiyu Chang, Regina Barzilay
2020 arXiv   pre-print
In this paper, we explore meta-learning for few-shot text classification.  ...  Our model is trained within a meta-learning framework to map these signatures into attention scores, which are then used to weight the lexical representations of words.  ... 
arXiv:1908.06039v3 fatcat:bbddbkpop5gynaloacfxnuib3q
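A minimal sketch of weighting lexical representations by distributional signatures: an inverse-frequency signature of the tf-idf family stands in for the paper's learned meta-network, and all names and constants are illustrative assumptions.

```python
import numpy as np

def signature_attention(tokens, embeddings, freq, eps=1e-5):
    """Weight word embeddings by a frequency-based distributional
    signature; rare words receive higher attention."""
    # Inverse-frequency signature (tf-idf style); a stand-in for the
    # learned mapping from signatures to attention scores.
    sig = np.array([eps / (eps + freq.get(t, 0.0)) for t in tokens])
    att = np.exp(sig) / np.exp(sig).sum()        # softmax over tokens
    vecs = np.stack([embeddings[t] for t in tokens])
    return att @ vecs                            # attention-weighted average

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=8) for w in ["the", "protein", "binds", "receptor"]}
freq = {"the": 0.05, "protein": 0.002, "binds": 0.001, "receptor": 0.0015}
rep = signature_attention(["the", "protein", "binds", "receptor"], emb, freq)
print(rep.shape)  # (8,)
```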

English Out-of-Vocabulary Lexical Evaluation Task [article]

Han Wang, Ye Wang, Xinxiang Zhang, Mi Lu, Yoonsuck Choe, Jingjing Cao
2019 arXiv   pre-print
The OOV words are words that only appear in test samples. The goal of the tasks is to provide solutions for OOV lexical classification and prediction.  ...  The tasks require annotators to infer the attributes of the OOV words based on their related contexts.  ...  Second, most of the test words and the candidates in lexical substitution tasks such as [12] are everyday words.  ... 
arXiv:1804.04242v3 fatcat:2h35jy5im5htbeqz76ejwu2qve
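Given the definition in the snippet, the OOV set itself is straightforward to compute. This toy snippet only illustrates that definition, not the task's annotation protocol.

```python
def oov_words(train_texts, test_texts):
    """Return the words that appear only in the test samples."""
    train_vocab = {w for t in train_texts for w in t.lower().split()}
    test_vocab = {w for t in test_texts for w in t.lower().split()}
    return test_vocab - train_vocab

print(oov_words(["the cat sat"], ["the blorp sat quietly"]))
# {'blorp', 'quietly'}
```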

Siamese recurrent networks learn first-order logic reasoning and exhibit zero-shot compositional generalization [article]

Mathijs Mul, Willem Zuidema
2019 arXiv   pre-print
Can neural nets learn logic?  ...  We approach this classic question with current methods, and demonstrate that recurrent neural networks can learn to recognize first-order logical entailment relations between expressions.  ...  In the last zero-shot learning experiment, we replace sets of nouns instead of single words, in order to assess the flexibility of the relational semantics that our networks have learned.  ... 
arXiv:1906.00180v1 fatcat:tf4i45rvprfmpm3mzljoyqccwa
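A hedged sketch of the kind of Siamese recurrent classifier the title names: one shared-weight GRU encoder reads both expressions, and the pair of final states is classified into an entailment relation. The dimensionality and the number of relation classes are assumptions.

```python
import torch
import torch.nn as nn

class SiameseGRU(nn.Module):
    """Shared-weight recurrent encoder for two expressions; the pair of
    final states is classified into an entailment relation."""
    def __init__(self, vocab_size, dim=64, n_relations=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.classify = nn.Linear(2 * dim, n_relations)

    def encode(self, ids):
        _, h = self.gru(self.embed(ids))  # h: (1, batch, dim)
        return h.squeeze(0)

    def forward(self, left_ids, right_ids):
        # The same encoder (shared weights) reads both expressions.
        l, r = self.encode(left_ids), self.encode(right_ids)
        return self.classify(torch.cat([l, r], dim=-1))

model = SiameseGRU(vocab_size=100)
logits = model(torch.randint(0, 100, (2, 5)), torch.randint(0, 100, (2, 5)))
print(logits.shape)  # torch.Size([2, 7])
```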

Word Frequency Does Not Predict Grammatical Knowledge in Language Models [article]

Charles Yu, Ryan Sie, Nico Tedeschi, Leon Bergen
2020 arXiv   pre-print
Finally, we find that a novel noun's grammatical properties can be few-shot learned from various types of training data.  ...  Neural language models learn, to varying degrees of accuracy, the grammatical properties of natural languages.  ...  Acknowledgements We thank the Google Cloud Platform research program for support. The Titan V used for this research was donated by the NVIDIA Corporation.  ... 
arXiv:2010.13870v1 fatcat:sj5ubiiqorevtoxpxaylrfzhpm
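One common way to probe a language model's grammatical knowledge is to compare its probabilities on minimal pairs; whether this matches the paper's exact protocol is an assumption, so the sketch below is illustrative only.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence):
    """Total log-likelihood of a sentence under the language model."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)  # loss is mean NLL over predicted tokens
    return -out.loss.item() * (ids.shape[1] - 1)

grammatical = log_prob("The keys to the cabinet are on the table.")
ungrammatical = log_prob("The keys to the cabinet is on the table.")
print(grammatical > ungrammatical)  # expect True if agreement is learned
```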

ON KNOWING A WORD

George A. Miller
1999 Annual Review of Psychology  
A person who knows a word knows much more than its meaning and pronunciation. The contexts in which a word can be used to express a particular meaning are a critical component of word knowledge.  ...  are able to identify the intended meanings of common polysemous words.  ...  Unfortunately, learning words from context is a slow process. Many contexts of use must be encountered before a new word is mastered, so extensive reading is required for a large vocabulary.  ... 
doi:10.1146/annurev.psych.50.1.1 pmid:15012457 fatcat:dawazqvddvgivbufyinujuomau

Which Statistics Reflect Semantics? Rethinking Synonymy and Word Similarity [chapter]

Derrick Higgins
2005 Studies in Generative Grammar  
Overview: A great deal of work has been done of late on the statistical modeling of word similarity relations (cf.  ...  Acknowledgements I would like to thank the conference organizers for providing an open forum for discussion, and my ETS colleagues for their helpful comments on an earlier draft of this paper.  ...  Any opinions expressed here are those of the author, and not necessarily of Educational Testing Service.  ... 
doi:10.1515/9783110197549.265 fatcat:plxpqfcdbjf6xijkw7kng7s4fu
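Classic statistics for word similarity can be reproduced in a few lines: positive pointwise mutual information over a co-occurrence matrix, followed by cosine similarity. The toy counts below are invented for illustration.

```python
import numpy as np

def ppmi(counts):
    """Positive PMI from a word-by-context co-occurrence count matrix."""
    total = counts.sum()
    pw = counts.sum(axis=1, keepdims=True) / total
    pc = counts.sum(axis=0, keepdims=True) / total
    pmi = np.log((counts / total + 1e-12) / (pw * pc + 1e-12))
    return np.maximum(pmi, 0.0)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

# Toy counts: rows = {car, automobile, banana}, columns = contexts.
counts = np.array([[8.0, 2.0, 0.0],
                   [7.0, 3.0, 0.0],
                   [0.0, 1.0, 9.0]])
m = ppmi(counts)
print(cosine(m[0], m[1]), cosine(m[0], m[2]))  # synonyms score higher
```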

On syllable structure and phonological variation: The case of i-epenthesis by Brazilian Portuguese learners of English

Paul John, Walcir Cardoso
2017 Ilha do Desterro  
representations for single lexical items.  ...  dual underlying representations which compete for selection at the moment of speaking.  ...  First and foremost, we would like to thank our Brazilian collaborator, Léa Cardoso, for opening the doors of her school and for her active involvement in many aspects of this research, including her assistance  ... 
doi:10.5007/2175-8026.2017v70n3p169 fatcat:3pb5fenatvgprc4pm7ymyx6dui
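The repair strategy named in the title can be illustrated with a toy rule, assuming a simplified inventory of codas that are legal in Brazilian Portuguese; the inventory and examples are assumptions, not the paper's data.

```python
# Brazilian Portuguese learners often repair English codas that are
# illicit in BP by inserting an epenthetic [i], e.g. 'dog' -> 'dogi'.
BP_LEGAL_CODAS = {"s", "r", "l", "n", "m"}  # simplified assumed inventory

def repair_coda(word):
    """Insert epenthetic 'i' after an illicit word-final consonant."""
    if word and word[-1] not in "aeiou" and word[-1] not in BP_LEGAL_CODAS:
        return word + "i"
    return word

print([repair_coda(w) for w in ["dog", "club", "pass", "beer"]])
# ['dogi', 'clubi', 'pass', 'beer']
```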

Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [article]

Shauli Ravfogel, Yanai Elazar, Jacob Goldberger, Yoav Goldberg
2021 arXiv   pre-print
Finally, we demonstrate the utility of our distilled representations by showing that they outperform the original contextualized representations in a few-shot parsing setting.  ...  In this work, we tackle the task of unsupervised disentanglement between semantics and structure in neural language representations: we aim to learn a transformation of the contextualized vectors that  ...  Acknowledgments We would like to thank Gal Chechik for providing valuable feedback on an early version of this work.  ... 
arXiv:2010.05265v2 fatcat:ejfjlke7czeuphg73lx4vjiasa
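A hedged sketch of one way to distill structure from contextualized vectors: learn a linear map with a triplet objective that pulls together vectors of structurally equivalent words from lexically different sentences and pushes away unrelated ones. The training signal shown is an assumption about the setup, not the authors' released code.

```python
import torch
import torch.nn as nn

dim = 768                              # contextualized vector size (assumed)
f = nn.Linear(dim, 128, bias=False)    # the learned transformation
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
loss_fn = nn.TripletMarginLoss(margin=1.0)

def step(anchor, positive, negative):
    """anchor/positive: vectors of the same structural slot in two
    lexically different sentences; negative: an unrelated slot."""
    opt.zero_grad()
    loss = loss_fn(f(anchor), f(positive), f(negative))
    loss.backward()
    opt.step()
    return loss.item()

a, p, n = (torch.randn(32, dim) for _ in range(3))  # placeholder batches
print(step(a, p, n))
```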

Analyzing machine-learned representations: A natural language case study [article]

Ishita Dasgupta, Demi Guo, Samuel J. Gershman, Noah D. Goodman
2019 arXiv   pre-print
We find that these systems can learn abstract rules and generalize them to new contexts under certain circumstances -- similar to human zero-shot reasoning.  ...  In this work, we study representations of sentences in one such artificial system for natural language processing.  ...  Acknowledgements We are grateful to Anatole Gershman, Tim O'Donnell, Joshua Greene and Fiery Cushman for helpful discussions. ID is supported by Microsoft Research.  ... 
arXiv:1909.05885v1 fatcat:wo7z5woybfdvhe732gmc452soy

Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data [article]

Lisa Anne Hendricks, Subhashini Venugopalan, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Trevor Darrell
2016 arXiv   pre-print
Our results show that DCC has distinct advantages over existing image and video captioning approaches for generating descriptions of new objects in context.  ...  Marcus Rohrbach was supported by a fellowship within the FITweltweit-Program of the German Academic Exchange Service (DAAD).  ... 
arXiv:1511.05284v2 fatcat:kjvewbdvybdhjbgo4jf27w7ote

A Survey On Neural Word Embeddings [article]

Erhan Sezerer, Selma Tekir
2021 arXiv   pre-print
The revolutionary idea of distributed representation for a concept is close to the working of a human mind in that the meaning of a word is spread across several neurons, and a loss of activation will  ...  The study of meaning in natural language processing (NLP) relies on the distributional hypothesis where language elements get meaning from the words that co-occur within contexts.  ...  [136] propose a neural representation learning model for predicting different types of lexical relations, e.g., hypernymy, synonymy, meronymy, etc.  ... 
arXiv:2110.01804v1 fatcat:rfxwasxwivdvzn6iukbpvvmnai
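The lexical-relation prediction mentioned in the last fragment can be sketched as a pair classifier over word embeddings; the feature construction (concatenation plus difference) is a common choice rather than the cited model, and the data below is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Classify a word pair (w1, w2) into a lexical relation from the
# concatenation of the two vectors and their difference.
rng = np.random.default_rng(0)
dim, n = 50, 200
w1, w2 = rng.normal(size=(n, dim)), rng.normal(size=(n, dim))
labels = rng.integers(0, 3, size=n)   # 0=hypernymy 1=synonymy 2=meronymy

pair_features = np.hstack([w1, w2, w1 - w2])
clf = LogisticRegression(max_iter=1000).fit(pair_features, labels)
print(clf.predict(pair_features[:5]))
```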

Effect of lexical cues on the production of active and passive sentences in Broca's and Wernicke's aphasia

Yasmeen Faroqi-Shah, Cynthia K Thompson
2003 Brain and Language  
However, when auxiliary and past tense morphemes were provided along with the verb stem, production of passive sentences improved drastically for both groups.  ...  The ability to produce active and passive reversible and non-reversible sentences was examined when varying amounts of lexical information were provided.  ...  Support for this notion comes from Bates et al. (1988), who found that the most frequent canonical word order is often preserved in aphasia in several languages.  ... 
doi:10.1016/s0093-934x(02)00586-2 pmid:12744953 pmcid:PMC3034248 fatcat:53cvgushmbdwhl4czmo6xp72ki

Virtual Augmentation Supported Contrastive Learning of Sentence Representations [article]

Dejiao Zhang, Wei Xiao, Henghui Zhu, Xiaofei Ma, Andrew O. Arnold
2022 arXiv   pre-print
We assess the performance of VaSCL on a wide range of downstream tasks, and set a new state-of-the-art for unsupervised sentence representation learning.  ...  We tackle this challenge by presenting Virtual augmentation Supported Contrastive Learning of sentence representations (VaSCL).  ...  [Table 3: Few-shot learning evaluation of intent classification.]  ... 
arXiv:2110.08552v2 fatcat:t374n34vsjhh5fqim6q6xodq2e
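A minimal sketch of contrastive learning with a "virtual" augmentation applied in embedding space rather than to the input text; the random perturbation here stands in for VaSCL's learned neighborhood perturbation, which is an assumption.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.05):
    """Standard InfoNCE: each row of z1 should match the same row of z2."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau
    targets = torch.arange(z1.shape[0])
    return F.cross_entropy(logits, targets)

# 'Virtual augmentation': perturb the sentence embedding itself instead
# of the input text (random direction as a stand-in for the learned one).
z = torch.randn(16, 256)
z_aug = z + 0.1 * F.normalize(torch.randn_like(z), dim=-1)
print(info_nce(z, z_aug).item())
```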

Working Memory and Reading Skill Re-examined [chapter]

2016 Attention and Performance XII  
British Library Cataloguing in Publication Data: a catalogue record for this book is available from the British Library. ISBN: 978-1-138-19163-1 (Set); ISBN: 978-1-315-54401-4 (Set) (ebk); ISBN: 978-1-138-  ...  any information storage or retrieval system, without permission in writing from the publishers.  ...  Office of Naval Research (O.N.R. Contract Number N0001486G0067 to S. Kornblum) and by a grant from the Economic and Social Research Council of Great Britain to M. Coltheart.  ... 
doi:10.4324/9781315630427-36 fatcat:hmaltn6wtng4nnkbn3izodfuki