177,138 Hits in 5.6 sec

From Frequency to Meaning: Vector Space Models of Semantics

P. D. Turney, P. Pantel
2010 The Journal of Artificial Intelligence Research  
Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text.  ...  Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into  ...  Thanks to the anonymous reviewers of JAIR for their very helpful comments and suggestions.  ... 
doi:10.1613/jair.2934 fatcat:vmbzpass3vezjmmtknzi4zcrre
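The survey's core object, a term–document vector space queried with cosine similarity, can be sketched in a few lines. This is a toy illustration with raw term frequencies and invented documents, not code from the paper:

```python
import math
from collections import Counter

def term_vector(doc):
    """Raw term-frequency vector for a whitespace-tokenized document."""
    return Counter(doc.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "semantic vector space models"]
vecs = [term_vector(d) for d in docs]

# The two "sat on" sentences share more terms, so they score higher.
assert cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2])
```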

A DISTRIBUTIONAL STRUCTURED SEMANTIC SPACE FOR QUERYING RDF GRAPH DATA

ANDRÉ FREITAS, EDWARD CURRY, JOÃO GABRIEL OLIVEIRA, SEÁN O'RIAIN
2011 International Journal of Semantic Computing (IJSC)  
The center of the approach relies on the use of a distributional semantic model to address the level of semantic interpretation demanded to build the data model independent approach.  ...  The article analyzes the geometric aspects of the proposed space, providing its description as a distributional structured vector space, which is built upon the Generalized Vector Space Model (GVSM).  ...  The geometric properties which arise in the model can provide a principled way to model the semantics of RDF or, more generally, labelled data graphs, adding to the vector space model structures and operations  ... 
doi:10.1142/s1793351x1100133x fatcat:4ws744oufnf65i632tpaqoapjq
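The Generalized Vector Space Model the article builds on relaxes the classic VSM assumption that term axes are orthogonal; one common formulation lets term co-occurrence define the inner product. A toy sketch (hypothetical terms and term–document matrix, not the authors' data or full construction):

```python
import numpy as np

# Tiny term-document matrix A: rows = terms, columns = corpus documents.
terms = ["graph", "rdf", "query", "music"]
A = np.array([[1., 1., 0.],   # "graph" occurs in docs 0 and 1
              [1., 0., 0.],   # "rdf"   occurs in doc 0
              [0., 1., 0.],   # "query" occurs in doc 1
              [0., 0., 1.]])  # "music" occurs in doc 2

# GVSM-style inner product: term-term co-occurrence replaces the
# classic VSM's assumption of orthogonal term axes.
G = A @ A.T

def gvsm_sim(q, d):
    """Similarity under the inner product <q, d> = q^T G d."""
    den = np.sqrt(q @ G @ q) * np.sqrt(d @ G @ d)
    return (q @ G @ d) / den if den else 0.0

q  = np.array([0., 1., 0., 0.])  # query: "rdf"
d1 = np.array([1., 0., 0., 0.])  # document about "graph"
d2 = np.array([0., 0., 0., 1.])  # document about "music"

# "rdf" and "graph" co-occur in doc 0, so d1 scores above d2 even
# though query and document share no term.
assert gvsm_sim(q, d1) > gvsm_sim(q, d2)
```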

Semantic representation

Apostol (Paul) Natsev, Milind R. Naphade, John R. Smith
2004 Proceedings of the 2004 ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '04  
This is done by constructing a model vector that acts as a compact semantic representation of the underlying content.  ...  In this paper we consider the problem of using such semantic concept detection to map the video clips into semantic spaces.  ...  The benefits of performing some of the above operations in the semantic model vector space, as opposed to the original low-level feature space, are validated empirically in Section 5.  ... 
doi:10.1145/1014052.1014133 dblp:conf/kdd/NatsevNS04 fatcat:mpqlht7morgirbpjwvnt5ymqbm

Image Semantic Transformation: Faster, Lighter and Stronger [article]

Dasong Li, Jianbo Wang
2018 arXiv   pre-print
It takes just 3 hours (GTX 1080) to train the models of 10 semantic transformations.  ...  Our model will reconstruct the images and manipulate Euclidean latent vectors to achieve semantic transformations and semantic image arithmetic calculations.  ...  And the mapping from Euclidean space to latent space of Discriminator in BEGAN is pretty easy to train.  ... 
arXiv:1803.09932v1 fatcat:k7grfxfw6vgr7ogniqwc45osvm

RichVSM

Rabeeh Abbasi, Steffen Staab
2009 Proceedings of the 20th ACM conference on Hypertext and hypermedia - HT '09  
We exploit semantic relationships between tags to reduce sparseness in Folksonomies and propose different enriched vector space models.  ...  We also propose a vector space model, Best of Breed, which utilizes an appropriate enrichment method based on the type of the query.  ...  But if we consider higher precision levels (15, 20), the results of enriched vector space models are better than the results obtained from the original vector space model.  ... 
doi:10.1145/1557914.1557952 dblp:conf/ht/AbbasiS09 fatcat:hip2ktln2reungz7vkevnzlly4

Learning a Deep Embedding Model for Zero-Shot Learning

Li Zhang, Tao Xiang, Shaogang Gong
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space.  ...  Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual/semantic description of object classes and visual representation of object images can be projected to for nearest  ...  The resulting projection direction is from a semantic space, e.g., attribute or word vector, to a visual feature space. Such a direction is opposite to the one adopted by most existing models.  ... 
doi:10.1109/cvpr.2017.321 dblp:conf/cvpr/ZhangXG17 fatcat:lzorgd5k55g3hlcbhopjfnntgu
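The projection direction this paper argues for, mapping class descriptions into the visual feature space and classifying by nearest neighbour there, can be sketched with a random matrix standing in for the learned deep mapping. All data here are synthetic; the paper trains this mapping end-to-end rather than drawing it at random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 3 unseen classes, each described by a 4-d semantic
# (attribute / word-vector) description; images live in an 8-d visual space.
semantic = rng.normal(size=(3, 4))   # one row per class
W = rng.normal(size=(4, 8))          # stand-in for the learned deep mapping

# Project class descriptions INTO the visual space (the paper's direction),
# then label an image by its nearest projected class prototype.
prototypes = semantic @ W            # shape (3, 8)

def classify(visual_feat):
    dists = np.linalg.norm(prototypes - visual_feat, axis=1)
    return int(np.argmin(dists))

# An image whose features sit near class 1's prototype gets label 1.
image = prototypes[1] + 0.01 * rng.normal(size=8)
assert classify(image) == 1
```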

Learning a Deep Embedding Model for Zero-Shot Learning [article]

Li Zhang, Tao Xiang, Shaogang Gong
2019 arXiv   pre-print
Instead of embedding into a semantic space or an intermediate space, we propose to use the visual space as the embedding space.  ...  Zero-shot learning (ZSL) models rely on learning a joint embedding space where both textual/semantic description of object classes and visual representation of object images can be projected to for nearest  ...  The resulting projection direction is from a semantic space, e.g., attribute or word vector, to a visual feature space. Such a direction is opposite to the one adopted by most existing models.  ... 
arXiv:1611.05088v4 fatcat:usn452kou5gnvfdt4twbf7y57i

Learning Word Embeddings for Hyponymy with Entailment-Based Distributional Semantics [article]

James Henderson
2017 arXiv   pre-print
This paper proposes distributional semantic models which efficiently learn word embeddings for entailment, using a recently-proposed framework for modelling entailment in a vector-space.  ...  Lexical entailment, such as hyponymy, is a fundamental issue in the semantics of natural language.  ...  about the semantics of a word from this model.  ... 
arXiv:1710.02437v1 fatcat:sb6kszs3rveyzont2bdiajmwru

Traces of Meaning Itself: Encoding distributional word vectors in brain activity

Jona Sassenhagen, Christian J. Fiebach
2019 Neurobiology of Language  
We find that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word  ...  However, models of semantics have to account not only for context-based word processing, but should also describe how word meaning is represented.  ...  ACKNOWLEDGMENTS The authors wish to thank Jonathan Grainger and Stephane Dufau for making available the English EEG data set. Edvard Heikel organized the collection of the German EEG data.  ... 
doi:10.1162/nol_a_00003 fatcat:5chluzccbrfsbcmjqptl5eobwm
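The encoding-model logic here, predicting a neural activity pattern from a word's position in vector space, can be sketched with ridge regression on synthetic data. The mapping below is linear by construction, so prediction succeeds almost perfectly; real EEG is far noisier:

```python
import numpy as np

rng = np.random.default_rng(3)
n_words, dim_sem, n_channels = 200, 50, 32

# Word embeddings and synthetic "neural" responses that depend on
# them linearly.
X = rng.normal(size=(n_words, dim_sem))          # word vectors
B_true = rng.normal(size=(dim_sem, n_channels))
Y = X @ B_true + 0.1 * rng.normal(size=(n_words, n_channels))

# Ridge-regression encoding model: predict the activity pattern from a
# word's position in vector space, evaluated on held-out words.
train, test = slice(0, 150), slice(150, None)
lam = 1.0
B = np.linalg.solve(X[train].T @ X[train] + lam * np.eye(dim_sem),
                    X[train].T @ Y[train])
pred = X[test] @ B

r = np.corrcoef(pred.ravel(), Y[test].ravel())[0, 1]
assert r > 0.9   # held-out prediction works because the mapping is linear
```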

Predication-based semantic indexing: permutations as a means to encode predications in semantic space

Trevor Cohen, Roger W Schvaneveldt, Thomas C Rindflesch
2009 AMIA Annual Symposium Proceedings  
In this paper, we present a novel vector space model that encodes semantic predications derived from MEDLINE by the SemRep system into a compact spatial representation.  ...  The associations captured by this method are of a different and complementary nature to those derived by traditional vector space models, and the encoding of predication types presents new possibilities  ...  Acknowledgments We would like to acknowledge Dominic Widdows, chief instigator of Semantic Vectors (10), some of which was adapted to this work, and Sahlgren, Holst and Kanerva for their remarkable contribution  ... 
pmid:20351833 pmcid:PMC2815384 fatcat:cxjqtpjotzgijlofagqw5hmdra
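The permutation idea described here, marking the predicate type by permuting the coordinates of a random index vector before superposing it, can be sketched as follows. `np.roll` stands in for the random permutations used in PSI, and the predication triple is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 1024

def elemental():
    """Sparse random 'elemental' vector, as in random indexing."""
    v = np.zeros(DIM)
    idx = rng.choice(DIM, size=20, replace=False)
    v[idx[:10]], v[idx[10:]] = 1.0, -1.0
    return v

concepts = {c: elemental() for c in ["aspirin", "headache", "fever"]}

# One coordinate permutation per predicate type; np.roll stands in
# for the random permutations used in PSI.
SHIFT = {"TREATS": 3}

# Encode the predication "aspirin TREATS headache" into a semantic
# vector for "aspirin" by superposing the permuted object vector.
sem_aspirin = np.roll(concepts["headache"], SHIFT["TREATS"])

# Query "what does aspirin TREAT?": reverse the permutation, then
# compare against the elemental vectors.
probe = np.roll(sem_aspirin, -SHIFT["TREATS"])
scores = {c: float(probe @ v) for c, v in concepts.items()}
assert max(scores, key=scores.get) == "headache"
```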

Automatic Feature Detection and Clustering Using Random Indexing [chapter]

Haïfa Nakouri, Mohamed Limam
2014 Lecture Notes in Computer Science  
We propose an automatic approach of image parsing, feature extraction, indexing and clustering, showing that the Feature Space model based on Random Indexing captures the semantic relation between similar  ...  The present work explores the possible application of Random Indexing in discovering feature contexts from image data, based on their semantics.  ...  The key idea of a Feature Space model is to assign a vector (generally a sparse vector) to each feature in the high dimensional vector space, whose relative directions are assumed to indicate semantic  ... 
doi:10.1007/978-3-319-07998-1_67 fatcat:w5vckuskdravtkjbbr2vjfkcai
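Random Indexing itself reduces to giving each item a sparse random "index" vector and accumulating, for every feature, the index vectors of its co-occurring features, so that shared contexts pull vectors into similar directions. A sketch with made-up visual-feature tokens, not the authors' image data:

```python
import numpy as np

rng = np.random.default_rng(2)
DIM, NNZ = 512, 8

def index_vector():
    """Near-orthogonal sparse random vector: a few +1/-1 entries."""
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=NNZ, replace=False)
    v[pos[:NNZ // 2]], v[pos[NNZ // 2:]] = 1.0, -1.0
    return v

# Each inner list = features extracted from one image (invented tokens).
images = [["edge", "corner", "texture"],
          ["edge", "corner", "gradient"],
          ["colour", "histogram", "saturation"]]

vocab = sorted({f for img in images for f in img})
index = {f: index_vector() for f in vocab}
context = {f: np.zeros(DIM) for f in vocab}

# A feature's context vector accumulates the index vectors of the
# features it co-occurs with.
for img in images:
    for f in img:
        for g in img:
            if g != f:
                context[f] += index[g]

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "edge" and "corner" always co-occur, so their context vectors align
# far more closely than "edge" and "histogram" do.
assert cos(context["edge"], context["corner"]) > cos(context["edge"], context["histogram"])
```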

A Multi-class Approach – Building a Visual Classifier based on Textual Descriptions using Zero-Shot Learning [article]

Preeti Jagdish Sajjan, Frank G. Glavin
2020 arXiv   pre-print
In this paper, we overcome the two main hurdles of ML, i.e. scarcity of data and constrained prediction of the classification model.  ...  Machine Learning (ML) techniques for image classification routinely require many labelled images for training the model and while testing, we ought to use images belonging to the same domain as those used  ...  ZSL models typically learn the mapping function that maps the feature space to the semantic vector space.  ... 
arXiv:2011.09236v1 fatcat:j5ndmxuzdzgxhf7cmlh5zksquy

Traces of Meaning Itself: Encoding distributional word vectors in brain activity [article]

Jona Sassenhagen, Christian J. Fiebach
2019 bioRxiv   pre-print
We find that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word  ...  However, models of semantics have to account not only for context-based word processing, but should also describe how word meaning is represented.  ...  Acknowledgements The authors wish to thank Jonathan Grainger and Stephane Dufau for making available the English dataset. Edvard Heikel organized collection of the German EEG data. The MNE-  ... 
doi:10.1101/603837 fatcat:vcjvr5oxlng3fnnzx6rghuerga

A Semantic Vector Retrieval Model for Desktop Documents

Li Sheng
2008 2008 International Conference on Computer Science and Software Engineering  
Compared with the traditional vector space model, the semantic model uses semantic and ontology technology to solve several problems that the traditional model could not overcome, such as the shortcomings of  ...  Finally, the experimental results show that the retrieval ability of our new model has significant improvement both on recall and precision.  ...  The main features of the semantic vector space model include: 1) The elements and dimension of the semantic vector space are different from the traditional one.  ... 
doi:10.1109/csse.2008.421 dblp:conf/csse/Sheng08 fatcat:ldsxy4sghbcbhbsz3pndycltuy

A Review on WordNet and Vector Space Analysis for Short-text Semantic Similarity

2017 International Journal of Innovations in Engineering and Technology  
In the vector space model, words can be represented as numeric vectors based on different semantic similarity measures; the similarity between the word numeric vectors can be calculated with the semantic measures  ...  This paper surveys two techniques that are helpful in generating extractive text summaries: WordNet and vector space analysis.  ...  The vector space model (TF-IDF), or term vector model, is used for representing the documents as vectors of identifiers such as index terms.  ... 
doi:10.21172/ijiet.81.018 fatcat:7fdugo52c5gonp4e7nmho6kdcq
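The TF-IDF weighting the review mentions can be written directly from its definition, term frequency scaled by log inverse document frequency. A toy corpus, not the review's data:

```python
import math
from collections import Counter

docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "the vector space model of semantics"]
tokenized = [d.split() for d in docs]
N = len(tokenized)

# Document frequency: in how many documents each term appears.
df = Counter(t for doc in tokenized for t in set(doc))

def tfidf(doc):
    """TF-IDF vector: term frequency times log inverse document frequency."""
    tf = Counter(doc)
    return {t: tf[t] * math.log(N / df[t]) for t in tf}

vec = tfidf(tokenized[0])
# "the" appears in every document, so its weight collapses to zero,
# while document-specific terms like "mat" keep positive weight.
assert vec["the"] == 0.0
assert vec["mat"] > 0.0
```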