10,170 Hits in 6.5 sec

Large Scale Image Indexing Using Online Non-negative Semantic Embedding [chapter]

Jorge A. Vanegas, Fabio A. González
2013 Lecture Notes in Computer Science  
can be used to search the collection using a query-by-example strategy and to annotate new unannotated images.  ...  This paper presents a novel method to address the problem of indexing a large set of images by taking advantage of associated multimodal content such as text or tags.  ...  Online Non-negative Semantic Embedding Model When the image's associated text has a rich and clean semantic interpretation (e.g., tags provided by experts), the text representation may be used directly as  ... 
doi:10.1007/978-3-642-41822-8_46 fatcat:cnsejphfuzgmtmmpnejkjwo5ya

Scalable Face Image Retrieval Using Attribute-Enhanced Sparse Codewords

Bor-Chun Chen, Yan-Ying Chen, Yin-Hsi Kuo, Winston H. Hsu
2013 IEEE transactions on multimedia  
Index Terms-Content-based image retrieval, face image, human attributes.  ...  retrieval in the offline and online stages.  ...  and images with negative attribute scores will use the other.  ... 
doi:10.1109/tmm.2013.2242460 fatcat:b7qhfd2hbjeulalkpvc3ntxlcu

Graph-RISE: Graph-Regularized Image Semantic Embedding [article]

Da-Cheng Juan, Chun-Ta Lu, Zhen Li, Futang Peng, Aleksei Timofeev, Yi-Ting Chen, Yaxi Gao, Tom Duerig, Andrew Tomkins, Sujith Ravi
2019 arXiv   pre-print
In this paper, we present Graph-Regularized Image Semantic Embedding (Graph-RISE), a large-scale neural graph learning framework that allows us to train embeddings to discriminate an unprecedented O(40M  ...  Graph-RISE outperforms state-of-the-art image embedding algorithms on several evaluation tasks, including image classification and triplet ranking.  ...  We also thank Expander, Image Understanding and several related teams for the technical support.  ... 
arXiv:1902.10814v1 fatcat:m5a6vg7yz5g5bcayu3st7lua2m

Deep LSTM for Emoji Twitter Sentiment Exploration via Distributed Embeddings

We first train our model to learn word and emoji embeddings from positive and negative tweets; a classifier then passes them through a neural network combined with an LSTM to achieve better performance.  ...  Sentiment data from social media is a vital digital-marketing resource that can help reveal real-world events and provide qualitative insights into how people perceive brands.  ...  The vocabulary size is equivalent to the largest word index in the corpus.  ... 
doi:10.35940/ijitee.k1521.0981119 fatcat:vnzwstalrbgb5pllng5mojzwaq
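The snippet's note that "vocabulary size is equivalent to the largest word index" can be sketched concretely. This is my illustration, not code from the paper; the token lists and variable names are invented, and the LSTM itself is omitted:

```python
# Build a word/emoji vocabulary index for tweets; the vocabulary size
# equals the largest assigned index + 1 (indices are assigned densely).
tweets = [["happy", "day", "🙂"], ["sad", "news", "🙁"]]

vocab = {}
for tok in (t for tweet in tweets for t in tweet):
    vocab.setdefault(tok, len(vocab))  # first occurrence gets the next index

vocab_size = max(vocab.values()) + 1   # same as len(vocab)
encoded = [[vocab[t] for t in tweet] for tweet in tweets]
```

Such integer sequences are what an embedding layer feeding an LSTM would consume.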

SESA: Supervised Explicit Semantic Analysis [article]

Dasha Bogdanova, Majid Yazdani
2017 arXiv   pre-print
We use RNNs to embed text input into this space. In addition to interpretability, our model makes use of the web-scale collaborative skills data that users provide for each LinkedIn profile.  ...  We propose a novel representation-learning model called Supervised Explicit Semantic Analysis (SESA) that is trained in a supervised fashion to embed items into a set of dimensions with explicit semantics  ...  Similarly, in [20] images and words are embedded into the same latent space for the image-tagging task.  ... 
arXiv:1708.03246v1 fatcat:fflftoag55dc7hrf242gneoyhu

The Emerging Trends of Multi-Label Learning [article]

Weiwei Liu, Xiaobo Shen, Haobo Wang, Ivor W. Tsang
2020 arXiv   pre-print
BMLS [109] jointly learns a low-rank embedding of the label matrix and the label co-occurrence matrix using a Poisson-Dirichlet-gamma non-negative factorization method [110].  ...  Word embeddings [63] have been successfully used to learn non-linear representations of text data for natural language processing (NLP) tasks, such as understanding word and document semantics and  ... 
arXiv:2011.11197v2 fatcat:hu6w4vgnwbcqrinrdfytmmjbjm

Context-aware Image Tweet Modelling and Recommendation

Tao Chen, Xiangnan He, Min-Yen Kan
2016 Proceedings of the 2016 ACM on Multimedia Conference - MM '16  
embedded URL, and 4) the Web as a whole.  ...  This validates the usefulness of external knowledge for interpreting images' semantics in social media.  ... 
doi:10.1145/2964284.2964291 dblp:conf/mm/ChenHK16 fatcat:nepnjdu5vbffbhrhkisac5p5fe

Deep Metric Learning with Alternating Projections onto Feasible Sets [article]

Oğul Can, Yeti Ziya Gürbüz, A. Aydın Alatan
2021 arXiv   pre-print
The proposed technique is applied with well-accepted losses and evaluated on the Stanford Online Products, CAR196, and CUB200-2011 datasets for image retrieval and clustering.  ...  To this end, we reformulate the distance metric learning problem as finding a feasible point of a constraint set where the embedding vectors of the training data satisfy the desired intra-class and inter-class  ...  The first 100 species (5,864 images) are used for training, and the remaining 100 species (5,924 images) for testing.  ... 
arXiv:1907.07585v3 fatcat:y2bfgwccc5elnibmtodiqzcp6q

Unsupervised Topic Hypergraph Hashing for Efficient Mobile Image Retrieval

Lei Zhu, Jialie Shen, Liang Xie, Zhiyong Cheng
2017 IEEE Transactions on Cybernetics  
In our method, relations between images and semantic topics are first discovered via robust collective non-negative matrix factorization.  ...  Index Terms-High-order semantic correlations, mobile image retrieval, topic hypergraph hashing (THH).  ...  By imposing non-negative constraints, the basis matrix can be considered as latently embedded semantic topics, with each column corresponding to one topic.  ... 
doi:10.1109/tcyb.2016.2591068 pmid:28113794 fatcat:544qshpsunhktceq4ipo2zdc2m
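The snippet describes discovering image-topic relations via non-negative matrix factorization, where the basis matrix plays the role of latent semantic topics. A minimal sketch of plain NMF with the classic multiplicative updates follows; the paper's "robust collective" variant adds terms this toy version does not model, and all sizes here are invented:

```python
import numpy as np

# Factor a non-negative image-feature matrix X (n_images x n_features)
# into W (images x topics) and H (topics x features), X ≈ W @ H.
rng = np.random.default_rng(0)
X = rng.random((8, 6))        # toy data: 8 images, 6 features

k = 3                          # assumed number of latent topics
W = rng.random((8, k)) + 0.1   # strictly positive initialisation
H = rng.random((k, 6)) + 0.1

eps = 1e-9                     # guards against division by zero
for _ in range(200):
    # Lee–Seung multiplicative updates for the Frobenius objective;
    # they preserve non-negativity of W and H by construction.
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H)   # reconstruction error after fitting
```

Rows of `H` are then the latent "topics"; row i of `W` gives image i's affinity to each topic.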

Multi-modal joint embedding for fashion product retrieval

A. Rubio, LongLong Yu, E. Simo-Serra, F. Moreno-Noguer
2017 2017 IEEE International Conference on Image Processing (ICIP)  
to have semantic meaning.  ...  We train this embedding on large-scale real-world e-commerce data, both by maximizing the similarity between related products and by using auxiliary classification networks that encourage the embedding  ...  The class labels are used for the classification losses and for randomly sampling negatives when training the embedding.  ... 
doi:10.1109/icip.2017.8296311 dblp:conf/icip/RubioYSM17 fatcat:wi4onryz6rcfxnc2e5jcrslzym

Text classification with word embedding regularization and soft similarity measure [article]

Vít Novotný, Eniafe Festus Ayetiran, Michal Štefánik, and Petr Sojka
2020 arXiv   pre-print
For evaluation, we use the kNN classifier and six standard datasets: BBCSPORT, TWITTER, OHSUMED, REUTERS-21578, AMAZON, and 20NEWS. We show a 39% average kNN test-error reduction with regularized word embeddings compared to non-regularized word embeddings.  ...  We describe a practical procedure for deriving such regularized embeddings through Cholesky factorization.  ...  As a result, word embeddings do not help us separate positive and negative documents.  ... 
arXiv:2003.05019v1 fatcat:r77oqlmrxvgffjbpady3p4qwmq

Understanding Pixel-level 2D Image Semantics with 3D Keypoint Knowledge Engine

Yang You, Chengkun Li, Yujing Lou, Zhoujun Cheng, Liangwei Li, Lizhuang Ma, Weiming Wang, Cewu Lu
2021 IEEE Transactions on Pattern Analysis and Machine Intelligence  
In this paper, we propose a new method for predicting images' corresponding semantics in the 3D domain and then projecting them back onto 2D images to achieve pixel-level understanding.  ...  In order to obtain reliable 3D semantic labels that are absent from current image datasets, we build a large-scale keypoint knowledge engine called KeypointNet, which contains 103,450 keypoints and 8,234  ...  In practice, this loss is optimized with online batch estimation, using hardest negative pair selection.  ... 
doi:10.1109/tpami.2021.3072659 pmid:33848241 fatcat:fw67ych3cfeihnhtsxuqxfxrra
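The snippet's "hardest negative pair selection" inside an online batch can be illustrated with a small margin-loss sketch. This is my own toy version (batch size, margin, and loss form are assumptions, not the paper's exact loss):

```python
import numpy as np

# A mini-batch of 6 unit-norm embeddings with 3 classes (2 samples each)
rng = np.random.default_rng(2)
emb = rng.standard_normal((6, 8))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2])

# Pairwise Euclidean distances within the batch
dist = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)

margin = 0.2
losses = []
for a in range(len(labels)):
    pos = (labels == labels[a]) & (np.arange(len(labels)) != a)
    neg = labels != labels[a]
    hardest_neg = dist[a][neg].min()   # closest wrong-class sample = hardest
    for p in np.where(pos)[0]:
        # hinge: positive pair should be closer than the hardest negative
        losses.append(max(0.0, dist[a, p] - hardest_neg + margin))

loss = float(np.mean(losses))
```

Mining only the hardest negative per anchor keeps the number of active pairs small while focusing gradients on the most violated constraints.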

Learning Word Representations with Hierarchical Sparse Coding [article]

Dani Yogatama and Manaal Faruqui and Chris Dyer and Noah A. Smith
2014 arXiv   pre-print
We propose a new method for learning word representations using hierarchical regularization in sparse coding, inspired by the linguistic study of word meanings.  ...  (2012): a word representation trained using non-negative sparse embedding (NNSE) on our corpus. Similar to the authors, we use an NNSE implementation from .  ...  The x-axis shows the original dimension index; within each block, we show the dimensions from the most negative (left) to the most positive (right) for readability.  ... 
arXiv:1406.2035v2 fatcat:uiefnlylkzhjtaflketzj2tvn4

VGSE: Visually-Grounded Semantic Embeddings for Zero-Shot Learning [article]

Wenjia Xu, Yongqin Xian, Jiuniu Wang, Bernt Schiele, Zeynep Akata
2022 arXiv   pre-print
Our model visually divides a set of images from seen classes into clusters of local image regions according to their visual similarity, and further imposes their class discrimination and semantic relatedness  ...  To associate these clusters with previously unseen classes, we use external knowledge, e.g., word embeddings, and propose a novel class relation discovery module.  ...  Previous works tackle this problem by using word embeddings for class names [29, 36] or semantic embeddings from online encyclopedia articles [3, 37, 63].  ... 
arXiv:2203.10444v1 fatcat:az7jtlhcvfh5lcg4mbewiva3j4

Learning the Best Pooling Strategy for Visual Semantic Embedding [article]

Jiacheng Chen, Hexiang Hu, Hao Wu, Yuning Jiang, Changhu Wang
2021 arXiv   pre-print
Visual Semantic Embedding (VSE) is a dominant approach for vision-language retrieval, which aims at learning a deep embedding space such that visual data are embedded close to their semantic text labels  ...  Recent VSE models use complex methods to better contextualize and aggregate multi-modal features into holistic embeddings.  ...  [11] used this approach for zero-shot image recognition [2, 25, 35], by matching visual embeddings with semantic word embeddings. Kiros et al.  ... 
arXiv:2011.04305v5 fatcat:5sj2vbji2zbpfd34632vernmye
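The aggregation step that this paper studies — collapsing a set of region features into one holistic embedding — comes down to a pooling choice. Below is a sketch of two standard extremes plus a softmax-weighted interpolation between them; the temperature-based variant is my illustration, not the paper's learned generalized pooling:

```python
import numpy as np

# 36 region features of dimension 16 for one image (sizes are assumptions)
rng = np.random.default_rng(3)
regions = rng.standard_normal((36, 16))

mean_pooled = regions.mean(axis=0)   # average pooling -> one 16-dim vector
max_pooled = regions.max(axis=0)     # max pooling     -> one 16-dim vector

# Softmax-weighted pooling with temperature t: per-dimension weights that
# approach uniform as t -> 0 (mean pooling) and one-hot as t -> inf (max).
t = 5.0
w = np.exp(t * regions) / np.exp(t * regions).sum(axis=0)
soft_pooled = (w * regions).sum(axis=0)
```

Because the soft-pooled vector is a convex combination per dimension, it always lies between the mean- and max-pooled results.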
Showing results 1 — 15 of 10,170 results