Vector space models of word representation are often evaluated using human similarity ratings. Those ratings are elicited in explicit tasks and have well-known subjective biases. ... As an alternative, we propose evaluating vector spaces using implicit cognitive measures. ... These positive results suggest that some of the implicit relation structure in the human brain is already reflected in current vector space models, and that it is in fact feasible to evaluate relation ... doi:10.18653/v1/w16-2513 dblp:conf/repeval/EttingerL16 fatcat:hvudyy6g2bekfai6lza2ce4agq
A comparison of vector similarity with human reaction times in a single-word priming experiment is presented. ... These vectors provide the basis for a representational model of semantic memory, hyperspace analogue to language (HAL). ... In other experiments, semantic vectors generated by using HAL have accounted for a range of semantic and associative priming results using stimuli from various investigators (Lund et al., 1995). ... doi:10.3758/bf03204766 fatcat:eniguyjdbje2djxjlmi5lkdsaq
The present paper describes WEISS (Word-Embeddings Italian Semantic Space), a distributional semantic model based on Italian. ... Highlights: • Distributional semantics provides valid computational models in psycholinguistics • For many languages, such models are unavailable in easy-to-access formats • A model is proposed for Italian ... I would like to thank Paweł Mandera for technical support concerning SNAUT and useful discussions about model development, and Cristina Burani and Francesca Peressotti for sharing the data used for model ... doi:10.2298/psi161208011m fatcat:s45e35bc3fcftdsywv3yce3znm
The approach is implemented in the hyperspace analogue to language (HAL) model of memory, which uses a simple global co-occurrence learning algorithm to encode the context in which words occur. ... Results are presented, and the argument is made that this simple process can ultimately provide the language-comprehension system with semantic and grammatical information required in the comprehension ... One set of stimuli that we have evaluated in detail using the HAL model is that used by Shelton and Martin (1992). ... doi:10.3758/bf03200643 fatcat:ekjojtdz7fd6bmilio6tduu234
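The global co-occurrence scheme this snippet describes can be sketched in a few lines. This is a minimal illustration, not HAL's exact implementation: the window size, the distance-based weighting (closer neighbours weighted more heavily), and the toy sentence are all assumptions for demonstration.

```python
from collections import defaultdict

def hal_cooccurrence(tokens, window=5):
    """HAL-style co-occurrence table: each word accumulates weighted
    counts of the words preceding it within the window, with closer
    neighbours weighted more heavily (weight = window - distance + 1)."""
    counts = defaultdict(lambda: defaultdict(float))
    for i, word in enumerate(tokens):
        for d in range(1, window + 1):
            if i - d < 0:
                break
            counts[word][tokens[i - d]] += window - d + 1
    return counts

# Toy corpus (invented); each row of the table is a word's context vector.
tokens = "the dog chased the cat and the cat ran".split()
table = hal_cooccurrence(tokens, window=3)
print(table["cat"]["the"])  # "the" occurs adjacent to "cat" twice → 3 + 3 = 6.0
```

Concatenating each word's row (preceding-context counts) and column (following-context counts) yields the high-dimensional vectors that HAL then compares by distance.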
Dependency-Based Construction of Semantic Space Models. Sebastian Padó (Saarland University), Mirella Lapata (University of Edinburgh). Traditionally, vector-based semantic space models use word co-occurrence ... We evaluate our framework on a range of tasks relevant for cognitive science and natural language processing: semantic priming, synonymy detection, and word sense disambiguation. ...
Word pair similarities are compared to reaction times of subjects in large scale lexical decision and naming tasks under semantic priming. ... This work presents a framework for word similarity evaluation grounded on cognitive sciences experimental data. ... We used a centered window of size 10 and generated vectors with 100 dimensions for all 6 models. ...doi:10.18653/v1/w17-5304 dblp:conf/repeval/AugusteRF17 fatcat:d7llb54sp5h3jdtjjcad4wyztq
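Evaluations of this kind typically correlate model similarity with behavioural latency: pairs the model rates as more similar should be responded to faster under priming. A minimal sketch with invented numbers (the similarities and reaction times below are illustrative, not data from this paper):

```python
import numpy as np

# Hypothetical prime-target pairs: cosine similarity under some model,
# and mean lexical-decision reaction times (ms). Higher similarity is
# expected to go with faster responses, i.e. a negative correlation.
similarity  = np.array([0.82, 0.65, 0.40, 0.31, 0.12])
reaction_ms = np.array([512., 538., 560., 571., 590.])

# Pearson correlation between the two measures.
r = np.corrcoef(similarity, reaction_ms)[0, 1]
print(round(r, 3))  # strongly negative for this toy data
```

In practice rank correlations (e.g. Spearman) are also common, since the similarity-to-latency mapping need not be linear.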
Only the components with the highest variance are retained, thus creating a reduced semantic space: As few as 100 or 200 dimensions are required to model several characteristics of human semantic memory ... HAL has been criticized for the strong influence of frequency on both the resulting vectors and the distances between word vectors. ... doi:10.3758/brm.40.3.705 pmid:18697665 fatcat:3q3ibi2jincpnevdqttxb56n4m
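Retaining only the highest-variance components is usually done with a truncated SVD (as in LSA). A small sketch under assumed toy data; the matrix values and the choice of k are invented for illustration:

```python
import numpy as np

# Toy co-occurrence matrix (rows = words, columns = context words);
# the counts are arbitrary illustrative values.
M = np.array([[4., 0., 2., 1.],
              [3., 1., 0., 2.],
              [0., 5., 1., 0.],
              [1., 4., 0., 1.]])

# Keep only the k components with the highest variance via truncated SVD,
# analogous to reducing a semantic space to 100-200 dimensions.
k = 2
U, S, Vt = np.linalg.svd(M, full_matrices=False)
reduced = U[:, :k] * S[:k]   # word vectors in the reduced space
print(reduced.shape)         # (4, 2)
```

The singular values in `S` are sorted in decreasing order, so slicing the first k columns keeps exactly the directions that capture the most variance.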
human semantic representation. ... problems, (3) evaluate various state-of-the-art representation models on this task, and (4) discuss the relationship between WA and prior evaluations of semantic representation with well-known similarity ... of vector space models in distributional semantics (Bruni et al., 2014, inter alia). ... doi:10.18653/v1/e17-1016 dblp:conf/eacl/KorhonenVK17 fatcat:ckrjnytvhzautpyg2p5ueda3xe
Behavior Research Methods
In each case, LSA was first used to derive a semantic space by training LSA on a corpus of text, thereby providing LSA with extensive examples of words across a wide range of contexts. ... In a similar vein, we can evaluate LSA's proximity ratings of concepts in semantic spaces and compare them with the proximity ratings of participants; we can measure the degree to which LSA's representation ...
The result was that the bilingual semantic space with Japanese as a pivot language, which is predicted to be a model for L1 Japanese/L2 English sequential bilinguals, achieved better performance in simulating ... These monolingual semantic spaces are then converted into ones with common dimensions, which are in turn integrated into a single multilingual semantic space. ... Despite their simplicity, DSMs have provided a useful framework for cognitive modeling, especially for human semantic knowledge (e.g., Jones, Kintsch, & Mewhort, 2006; Landauer & Dumais, 1997). ... dblp:conf/eapcogsci/Utsumi15 fatcat:h6a3vv2tqvbhne2ifp5tgsv7o4
Traditionally, vector-based semantic space models use word co-occurrence counts from large corpora to represent lexical meaning. ... We evaluate our framework on a range of tasks relevant for cognitive science and natural language processing: semantic priming, synonymy detection, and word sense disambiguation. ... Acknowledgments We are grateful to Diana McCarthy for providing us with the results of her system on our data. ... doi:10.1162/coli.2007.33.2.161 fatcat:hshxmpn6ifblrlxxqx2ht2x2xq
This package enables a variety of functions and computations based on Vector Semantic Models such as Latent Semantic Analysis (LSA) Landauer, Foltz and Laham (Discourse Processes 25:259-284, 1998), which ... LSAfun uses precreated LSA spaces and provides functions for (a) Similarity Computations between words, word lists, and documents; (b) Neighborhood Computations, such as obtaining a word's or document's ... We also want to thank Jon Willits for providing us with the CHILDES data. Special thanks to our student assistant Simon Thielebein for valuable software support. ... doi:10.3758/s13428-014-0529-0 pmid:25425391 fatcat:7hxczawyjfaqlgjifmkboysvga
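LSAfun itself is an R package; as a language-agnostic sketch of the two kinds of computation named above — similarity between words and nearest-neighbour lookup — here is a minimal Python version. The `space` dictionary and its vectors are invented toy data, not a real LSA space.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighborhood(word, space, n=2):
    """Return the n nearest neighbours of `word` in a {word: vector} space."""
    sims = {w: cosine(space[word], vec) for w, vec in space.items() if w != word}
    return sorted(sims, key=sims.get, reverse=True)[:n]

space = {                           # toy vectors, not a precreated LSA space
    "dog": np.array([0.9, 0.8, 0.1]),
    "cat": np.array([0.8, 0.9, 0.2]),
    "car": np.array([0.1, 0.2, 0.9]),
}
print(neighborhood("dog", space))   # "cat" ranks closest to "dog"
```

Document-level similarity in such packages typically works the same way, with a document vector formed by summing or averaging its word vectors.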
Our findings suggest that humans respond with words closer to the cue within the context embedding space (rather than the word embedding space), when asked to generate thematically related words. ... Word embeddings obtained from neural network models such as Word2Vec Skipgram have become popular representations of word meaning and have been evaluated on a variety of word similarity and relatedness ... Asymmetries are the norm in semantic priming data, leading to the early theoretical prominence of spreading activation models to account for human data. ... doi:10.18653/v1/n18-1062 dblp:conf/naacl/AsrZJ18 fatcat:kxx3euprnrcjffoukibzq3dh74
Using various settings in semantic priming, we have carried out a thorough evaluation by comparing our approach to a number of state-of-the-art methods on six annotation corpora in different domains, i.e ... Experimental results on semantic priming suggest that our approach outperforms those state-of-the-art methods considerably in various aspects. ... Finally, we describe the evaluation criteria used in semantic priming. ... arXiv:1506.05514v1 fatcat:33d622ul65ad3jd2yhsfcou2iu
Using various settings in semantic priming, we have carried out a thorough evaluation by comparing our approach to a number of state-of-the-art methods on six annotation corpora in different domains, i.e ... Experimental results on semantic priming suggest that our approach outperforms those state-of-the-art methods considerably in various aspects. ... Finally, we describe the evaluation criteria used in semantic priming. ... doi:10.1016/j.neunet.2016.01.004 pmid:26874967 fatcat:wzz5n7gvbzckhdpxbtirwnn73y