
Significant Words Language Models for Contextual Suggestion

Mostafa Dehghani, Jaap Kamps, Hosein Azarbonyad, Maarten Marx
2016 Text Retrieval Conference  
Here, we describe our approach, which employs Significant Words Language Models (SWLM) [2] as an effective method for estimating models representing significant features of sets of attractions as user  ...  One of the key steps of contextual suggestion methods is estimating a proper model for representing different objects in the data like users and attractions.  ...  ESTIMATING EFFECTIVE PROFILES In this section, we explain how to estimate significant words language models and how to use them in the contextual suggestion task.  ... 
dblp:conf/trec/DehghaniKAM16 fatcat:4peqdnvpofhw7omyc7gyiij2ie
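The SWLM entry above is about estimating a profile that keeps the terms significant to a user's liked attractions while discounting general and attraction-specific terms. The paper does this with an EM-style mixture estimation; the sketch below only illustrates the underlying contrast between a liked set and a background collection, with all names and the scoring ratio being assumptions rather than the paper's method.

```python
from collections import Counter

def significant_words_profile(liked_docs, background_docs, top_k=10, floor=1e-9):
    """Rough sketch of a 'significant words' user profile: score terms by how much
    more frequent they are in the user's liked attractions than in a background
    collection. The actual SWLM estimates a mixture of general, document-specific,
    and significant-words models via EM; this ratio is only an illustration."""
    liked = Counter(w for d in liked_docs for w in d.lower().split())
    background = Counter(w for d in background_docs for w in d.lower().split())
    n_liked, n_bg = sum(liked.values()), sum(background.values())
    scores = {
        w: (c / n_liked) / (background.get(w, 0) / n_bg + floor)
        for w, c in liked.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

profile = significant_words_profile(
    ["quiet jazz bar with live music", "small jazz club downtown"],
    ["family theme park", "city art museum", "large shopping mall"],
)
print(profile)  # discriminative terms like 'jazz' should rank highly
```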

Contextual predictability influences word and morpheme duration in a morphologically complex language (Kaqchikel Mayan)

Kevin Tang, Ryan Bennett
2018 Journal of the Acoustical Society of America  
It is found that the contextual predictability of a word has a significant effect on its duration. The effect is manifested differently for lexical words and function words.  ...  In this paper the question of whether the probabilistic reduction effect, as previously observed for majority languages like English, is also found in a language (Kaqchikel Mayan) which has relatively  ...  "Topic and focus in Mayan," Language 68(1), 43-80. Aissen, J. (1999). "External possessor and logical subject in Tz'utujil," in External Possession, edited by D. Payne and I.  ... 
doi:10.1121/1.5046095 pmid:30180666 fatcat:bt3xco46qvhfpbnirt7tuldfx4
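The Tang and Bennett entry relates a word's duration to its contextual predictability. A common operationalization of predictability in probabilistic-reduction studies is the surprisal of a word given its preceding context; the sketch below uses a smoothed bigram estimate, which is an assumption and not necessarily the paper's exact measure.

```python
import math
from collections import Counter

def bigram_surprisal(corpus_tokens):
    """Estimate per-token surprisal -log2 P(w_i | w_{i-1}) with add-one smoothing.
    A simple stand-in for the 'contextual predictability' predictors used in
    probabilistic-reduction studies; the paper's estimator may differ."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    vocab = len(unigrams)
    surprisals = []
    for prev, cur in zip(corpus_tokens, corpus_tokens[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab)
        surprisals.append((cur, -math.log2(p)))
    return surprisals

tokens = "the cat sat on the mat the cat slept".split()
for word, s in bigram_surprisal(tokens):
    print(f"{word:6s} {s:.2f} bits")  # higher surprisal = less predictable in context
```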

Thinking ahead: prediction in context as a keystone of language in humans and machines [article]

Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, Aren Jansen, Harshvardhan Gazula (+16 others)
2020 bioRxiv   pre-print
Our findings suggest that deep language models provide an important step toward creating a biologically feasible computational framework for generative language.  ...  Departing from classical rule-based linguistic models, advances in deep learning have led to the development of a new family of self-supervised deep language models (DLMs).  ...  B) Encoding model performance for contextual embeddings (GPT2) aggregated across all electrodes with significant encoding for GloVe (Fig. 3B): contextual embeddings (purple), static embeddings (GloVe, blue  ... 
doi:10.1101/2020.12.02.403477 fatcat:h2nvxq6m75gjbagdjsxqj3qeay
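The snippet above contrasts encoding-model performance for contextual (GPT-2) and static (GloVe) embeddings. A minimal sketch of such an encoding model, under the assumption that it is a linear (ridge) map from word embeddings to a neural response scored by held-out correlation, follows; the data here are synthetic stand-ins, not the study's recordings.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def encoding_score(embeddings, responses, alpha=10.0, seed=0):
    """Fit a ridge encoding model from word embeddings to a neural response and
    return the held-out Pearson correlation, the usual metric in these studies.
    Shapes: embeddings (n_words, dim), responses (n_words,)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, responses, test_size=0.2, random_state=seed)
    model = Ridge(alpha=alpha).fit(X_tr, y_tr)
    return np.corrcoef(model.predict(X_te), y_te)[0, 1]

# Toy comparison with random vectors standing in for real embeddings/recordings.
rng = np.random.default_rng(0)
n_words = 500
contextual = rng.normal(size=(n_words, 1600))   # e.g., GPT-2 hidden states
static = rng.normal(size=(n_words, 300))        # e.g., GloVe vectors
response = contextual[:, 0] + 0.1 * rng.normal(size=n_words)  # fake electrode signal
print("contextual r =", encoding_score(contextual, response))
print("static     r =", encoding_score(static, response))
```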

How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings

Kawin Ethayarajh
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
For one, we find that the contextualized representations of all words are not isotropic in any layer of the contextualizing model.  ...  Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks.  ...  Acknowledgments We thank the anonymous reviewers for their insightful comments. We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support.  ... 
doi:10.18653/v1/d19-1006 dblp:conf/emnlp/Ethayarajh19 fatcat:5ao7kjq3nzdq5mqsg3bcn4c5zm
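The (an)isotropy finding in the Ethayarajh entry can be checked with a simple statistic: the average cosine similarity between contextualized representations of randomly sampled word occurrences in a given layer. Values near zero indicate isotropy; large positive values indicate anisotropy. The sketch below assumes you already have a matrix of per-occurrence vectors for one layer.

```python
import numpy as np

def anisotropy(layer_vectors, n_pairs=1000, seed=0):
    """Average cosine similarity between representations of randomly sampled
    word occurrences -- the anisotropy baseline used by Ethayarajh (2019).
    layer_vectors: array of shape (n_occurrences, hidden_dim) for one layer."""
    rng = np.random.default_rng(seed)
    normed = layer_vectors / np.linalg.norm(layer_vectors, axis=1, keepdims=True)
    i = rng.integers(0, len(normed), n_pairs)
    j = rng.integers(0, len(normed), n_pairs)
    return float(np.mean(np.sum(normed[i] * normed[j], axis=1)))

# Isotropic random vectors score near 0; a shared direction drives the value up.
iso = np.random.default_rng(1).normal(size=(5000, 768))
aniso = iso + 3.0  # add a common offset to every vector
print(anisotropy(iso), anisotropy(aniso))
```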

Brain embeddings with shared geometry to artificial contextual embeddings, as a code for representing language in the human brain [article]

Ariel Goldstein, Avigail Dabush, Bobbi Aubrey, Mariano Schain, Samuel A. Nastase, Zaid Zada, Eric Ham, Zhuoqiao Hong, Amir Feder, Harshvardhan Gazula, Eliav Buchnik, Werner Doyle (+8 others)
2022 bioRxiv   pre-print
Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language.  ...  From these fine-grained spatiotemporal neural recordings, we derived for each patient a continuous vectorial representation for each word (i.e., a brain embedding).  ...  In order to test whether there was a significant difference between the performance of the model using the actual contextual embedding for the test words compared to the performance using the nearest word  ... 
doi:10.1101/2022.03.01.482586 fatcat:xisd6a7kdvbdvphgr6wz2ywbdi
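The control described in the snippet above swaps each test word's actual contextual embedding for its nearest neighbor in embedding space and re-scores the model. The sketch below only shows that nearest-neighbor substitution step, assuming cosine similarity and toy data; the paper's full zero-shot mapping and significance testing are not reproduced here.

```python
import numpy as np

def nearest_neighbor_control(test_emb, vocab_emb):
    """For each test embedding, return the most cosine-similar embedding from the
    vocabulary set (excluding itself). Re-scoring a brain-to-embedding model with
    these substitutes is the control contrast described in the snippet above."""
    t = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    v = vocab_emb / np.linalg.norm(vocab_emb, axis=1, keepdims=True)
    sims = t @ v.T
    np.fill_diagonal(sims, -np.inf)  # valid here because test and vocab sets coincide
    return vocab_emb[np.argmax(sims, axis=1)]

emb = np.random.default_rng(0).normal(size=(20, 50))   # toy contextual embeddings
neighbors = nearest_neighbor_control(emb, emb)
print(neighbors.shape)  # (20, 50): one substitute embedding per test word
```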

Incorporating Context into Language Encoding Models for fMRI [article]

Shailee Jain, Alexander Huth
2018 bioRxiv   pre-print
language model and contextual information.  ...  In this work, we instead build encoding models using rich contextual representations derived from an LSTM language model.  ...  Acknowledgments We thank Jack Gallant, Wendy de Heer, Frederic Theunissen, and Thomas Griffiths for helping design the fMRI experiment and collect the data used here; Brittany Griffin and Anwar Nuñez for  ... 
doi:10.1101/327601 fatcat:3hty7frxmrgwfmfargwtczsfrm
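The Jain and Huth entry builds fMRI encoding models from LSTM language-model states computed over varying amounts of preceding context. The sketch below only illustrates the feature-extraction mechanics (an untrained LSTM over a fixed context window, with all sizes and names assumed); in the paper the LSTM is a trained language model and the features feed a regularized encoding model like the one sketched earlier.

```python
import torch

# For each word, run an LSTM over a window of preceding words and keep the final
# hidden state as that word's contextual representation (one regressor per word).
vocab_size, emb_dim, hidden_dim, context_len = 1000, 64, 128, 10
embed = torch.nn.Embedding(vocab_size, emb_dim)
lstm = torch.nn.LSTM(emb_dim, hidden_dim, batch_first=True)

story = torch.randint(0, vocab_size, (200,))          # toy token ids for a story
features = []
for t in range(len(story)):
    window = story[max(0, t - context_len): t + 1].unsqueeze(0)  # (1, <= 11)
    _, (h_n, _) = lstm(embed(window))                 # h_n: (1, 1, hidden_dim)
    features.append(h_n.squeeze())
features = torch.stack(features)                      # (200, hidden_dim) regressors
print(features.shape)
```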

How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings [article]

Kawin Ethayarajh
2019 arXiv   pre-print
For one, we find that the contextualized representations of all words are not isotropic in any layer of the contextualizing model.  ...  Replacing static word embeddings with contextualized word representations has yielded significant improvements on many NLP tasks.  ...  Acknowledgments We thank the anonymous reviewers for their insightful comments. We thank the Natural Sciences and Engineering Research Council of Canada (NSERC) for their financial support.  ... 
arXiv:1909.00512v1 fatcat:tptikjombrhw3l2ikulohaekrq

Effects of fluency, oral language, and executive function on reading comprehension performance

Laurie E. Cutting, April Materek, Carolyn A. S. Cole, Terry M. Levine, E. Mark Mahone
2009 Annals of Dyslexia  
Results indicated that TD and S-RCD participants read isolated words at a faster rate than participants with GRD; however, both RD groups had contextual word fluency and oral language weaknesses.  ...  To this end, the present study investigated isolated and contextual word fluency, oral language, and executive function on reading comprehension performance in 56 9- to 14-year-old children [21 typically  ...  The authors thank Sarah Eason for her assistance with data collection.  ... 
doi:10.1007/s11881-009-0022-0 pmid:19396550 pmcid:PMC2757040 fatcat:rvejwlifz5g25bs7wzcyy6nndq

Word class flexibility: A deep contextualized approach [article]

Bai Li, Guillaume Thomas, Yang Xu, Frank Rudzicz
2020 arXiv   pre-print
Our work highlights the utility of deep contextualized models in linguistic typology.  ...  Our method builds on recent work in contextualized word embeddings to quantify semantic shift between word classes (e.g., noun-to-verb, verb-to-noun), and we apply this method to 37 languages.  ...  Contextualized language models Deep contextualized language models take a sequence of natural language tokens and produce a sequence of context-sensitive embeddings for each token.  ... 
arXiv:2009.09241v1 fatcat:nxsfohxw4rfopiserotc23zmbu
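The word class flexibility entry quantifies semantic shift between a word's noun and verb usages from contextualized embeddings. One simple way to do this, assumed here rather than taken from the paper, is the cosine distance between the centroids of the word's noun-usage and verb-usage embeddings.

```python
import numpy as np

def class_shift(noun_usage_embs, verb_usage_embs):
    """Cosine distance between the centroid of a word's noun-usage contextual
    embeddings and the centroid of its verb-usage embeddings -- one simple way to
    quantify noun<->verb semantic shift; the paper's metrics may differ.
    Inputs: arrays of shape (n_usages, hidden_dim)."""
    n = np.mean(noun_usage_embs, axis=0)
    v = np.mean(verb_usage_embs, axis=0)
    cos = np.dot(n, v) / (np.linalg.norm(n) * np.linalg.norm(v))
    return 1.0 - cos

rng = np.random.default_rng(0)  # toy usage embeddings standing in for BERT outputs
print(class_shift(rng.normal(size=(30, 768)), rng.normal(size=(40, 768))))
```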

Correspondence between the layered structure of deep language models and temporal structure of natural language processing in the human brain [article]

Ariel Goldstein, Eric Ham, Samuel A Nastase, Zaid Zada, Avigail Dabush, Bobbi Aubrey, Mariano Schain, Harshvardhan Gazula, Amir Feder, Werner Doyle, Sasha Devore, Patricia Dugan (+7 others)
2022 bioRxiv   pre-print
We supplied this same narrative to a high-performing DLM (GPT2-XL) and extracted the contextual embeddings for each word in the story across all 48 layers of the model.  ...  Deep language models (DLMs) provide a novel computational paradigm for how the brain processes natural language.  ...  This finding suggests that the dynamic of neural responses in human language areas is systematically different for predictable and unpredictable words.  ... 
doi:10.1101/2022.07.11.499562 fatcat:jxihpeyijzgdtkfou6iuizvnbu
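Extracting per-word contextual embeddings from every layer, as described in the snippet above, can be done with the Hugging Face transformers API roughly as sketched below. The paper uses GPT2-XL (48 layers); the small "gpt2" checkpoint is used here only so the example runs quickly, and the sentence is a placeholder.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")
model.eval()

text = "the quick brown fox jumps over the lazy dog"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple of (num_layers + 1) tensors, each (1, seq_len, hidden_dim);
# index 0 is the embedding layer, indices 1..num_layers are the transformer blocks.
per_layer = torch.stack(outputs.hidden_states[1:])    # (layers, 1, seq_len, dim)
print(per_layer.shape)
last_word_embeddings = per_layer[:, 0, -1, :]          # one vector per layer for "dog"
print(last_word_embeddings.shape)
```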

Polyglot Contextual Representations Improve Crosslingual Transfer [article]

Phoebe Mulcaire, Jungo Kasai, Noah A. Smith
2019 arXiv   pre-print
We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages.  ...  Our method combines the advantages of contextual word representations with those of multilingual representation learning.  ...  Acknowledgments The authors thank Mark Neumann for assistance with the AllenNLP library and the anonymous reviewers for their helpful feedback.  ... 
arXiv:1902.09697v2 fatcat:pbn2muy3qbecjhkb3w7hj5ysea
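The Rosita entry trains a single contextual language model on text from multiple languages. The sketch below only illustrates the data-mixing idea (interleaving sentences from several monolingual corpora into one training stream); the corpora, sampling scheme, and vocabulary handling are assumptions, and Rosita's actual setup is more involved.

```python
import random

def mixed_language_stream(corpora, n_sentences, seed=0):
    """Interleave sentences drawn from several monolingual corpora into one
    training stream, so a single shared-vocabulary language model sees all
    languages. A rough sketch of polyglot training data preparation."""
    rng = random.Random(seed)
    langs = list(corpora)
    return [rng.choice(corpora[rng.choice(langs)]) for _ in range(n_sentences)]

corpora = {
    "en": ["the dog sleeps", "rain is falling"],
    "es": ["el perro duerme", "está lloviendo"],
}
print(mixed_language_stream(corpora, 4))
```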

The arbitrariness of the sign: Learning advantages from the structure of the vocabulary

Padraic Monaghan, Morten H. Christiansen, Stanka A. Fitneva
2011 Journal of experimental psychology. General  
For both the simulations and the behavioral studies, we found that the optimal structure of the vocabulary for learning incorporated this division of labor.  ...  Recent research has demonstrated that systematic mappings between phonological word forms and their meanings can facilitate language learning (e.g., in the form of sound symbolism or cues to grammatical  ...  The experimental and modeling results combine to suggest that a division of labor is expressed within the structure of the form-meaning mappings in natural languages: arbitrariness supports meaning individuation  ... 
doi:10.1037/a0022924 pmid:21517205 fatcat:hjsnqe2yf5ht3h45nro242wubq

Zipfian frequency distributions facilitate word segmentation in context

Chigusa Kurumada, Stephan C. Meylan, Michael C. Frank
2013 Cognition  
Word frequencies in natural language follow a highly skewed Zipfian distribution, but the consequences of this distribution for language acquisition are only beginning to be understood.  ...  Typically, learning experiments that are meant to simulate language acquisition use uniform word frequency distributions.  ...  Florian Jaeger, Noah Goodman, Josh Tenenbaum, and the members of the Stanford Language and Cognition Lab for valuable discussion.  ... 
doi:10.1016/j.cognition.2013.02.002 pmid:23558340 fatcat:6f5vlarc5rglvo6hqr4pp6yg6m
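The Kurumada et al. entry contrasts Zipfian word-frequency distributions with the uniform distributions typical of artificial-language segmentation experiments. A minimal sketch of generating a Zipfian token stream for such an experiment follows; the lexicon and exponent are illustrative assumptions.

```python
import random

def zipfian_token_stream(words, n_tokens, exponent=1.0, seed=0):
    """Sample a token stream whose word frequencies follow a Zipfian distribution
    (p(rank r) proportional to 1 / r**exponent), rather than the uniform
    frequencies of a typical artificial-language segmentation experiment."""
    weights = [1.0 / (r ** exponent) for r in range(1, len(words) + 1)]
    rng = random.Random(seed)
    return rng.choices(words, weights=weights, k=n_tokens)

lexicon = ["golabu", "padoti", "bidaku", "tupiro", "daropi", "kubapi"]
stream = zipfian_token_stream(lexicon, 20)
print("".join(stream))  # unsegmented input of the kind given to learners
```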

Evaluating Contextual Embeddings and their Extraction Layers for Depression Assessment [article]

Matthew Matero, Albert Hung, H. Andrew Schwartz
2022 arXiv   pre-print
At the same time, pre-trained contextual word embedding models have grown to dominate much of NLP, but little is known empirically about how best to apply them for mental health assessment.  ...  Using degree of depression as a case study, we do an empirical analysis on which off-the-shelf language model, individual layers, and combinations of layers seem most promising when applied to human-level  ...  Each model is used to encode a 768 dimensional vector for all words that are then averaged to a user representation.  ... 
arXiv:2112.13795v2 fatcat:rvdseown3bffrondslvt5oarym
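The snippet above describes averaging 768-dimensional contextual word vectors into a single user-level representation. A minimal sketch of that aggregation step, with toy arrays standing in for the model's per-token outputs, is shown below; which layer (or combination of layers) to extract the vectors from is the question the paper studies.

```python
import numpy as np

def user_representation(message_token_embeddings):
    """Average contextual token embeddings (e.g., 768-d vectors from one layer of
    a pre-trained model) over all of a user's words to obtain one user-level
    vector. Input: list of (n_tokens_i, 768) arrays, one per message."""
    all_tokens = np.vstack(message_token_embeddings)
    return all_tokens.mean(axis=0)

rng = np.random.default_rng(0)
messages = [rng.normal(size=(n, 768)) for n in (12, 30, 7)]  # toy token embeddings
print(user_representation(messages).shape)  # (768,)
```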

Polyglot Contextual Representations Improve Crosslingual Transfer

Phoebe Mulcaire, Jungo Kasai, Noah A. Smith
2019 Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)  
We introduce Rosita, a method to produce multilingual contextual word representations by training a single language model on text from multiple languages.  ...  Our method combines the advantages of contextual word representations with those of multilingual representation learning.  ...  Acknowledgments The authors thank Mark Neumann for assistance with the AllenNLP library and the anonymous reviewers for their helpful feedback.  ... 
doi:10.18653/v1/n19-1392 dblp:conf/naacl/MulcaireKS19 fatcat:hb3yqssp7rgcxjnbnxpubbdlsy
Showing results 1 – 15 out of 198,200 results