Semantic Consistency Cross-modal Retrieval with Semi-supervised Graph Regularization

Gongwen Xu, Xiaomei Li, Zhijun Zhang
2020 IEEE Access  
Most of the existing cross-modal retrieval methods make use of labeled data to learn projection matrices for different modal data.  ...  INDEX TERMS Cross-modal retrieval, semi-supervised, graph regularization, subspace learning.  ...  Joint latent subspace learning and regression (JLSLR) [29] uses spectral regression when learning potential subspaces.  ... 
doi:10.1109/access.2020.2966220 fatcat:pqaj2o6hdzhczgbsr2dkzl7ssy
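
The snippet above refers to learning projection matrices under a graph-regularization term. As a rough illustration of that general idea (not the paper's actual objective), the sketch below evaluates the standard Laplacian smoothness penalty tr((XP)^T L (XP)) that semi-supervised subspace methods commonly add; the variable names and the dense RBF affinity used for the demo are assumptions.

    import numpy as np

    def laplacian_penalty(X, P, W):
        """Graph-regularization term tr((XP)^T L (XP)) used by many
        semi-supervised subspace-learning objectives.

        X : (n, d) features of one modality
        P : (d, k) projection into the common subspace
        W : (n, n) symmetric affinity over labeled + unlabeled samples
        """
        L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
        Z = X @ P                        # samples projected into the subspace
        return np.trace(Z.T @ L @ Z)     # small when similar samples stay close

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 20))                              # one modality's features
    P = rng.normal(size=(20, 8))                               # candidate projection
    W = np.exp(-np.square(X[:, None] - X[None, :]).sum(-1))    # toy RBF affinity
    print(laplacian_penalty(X, P, W))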

Cross-Modal Learning via Pairwise Constraints [article]

Ran He, Man Zhang, Liang Wang, Ye Ji, Qiyue Yin
2014 arXiv   pre-print
For unsupervised learning, we propose a cross-modal subspace clustering method to learn a common structure for different modalities.  ...  For supervised learning, to reduce the semantic gap and the outliers in pairwise constraints, we propose a cross-modal matching method based on compound ?  ...  projection matrix to project different modalities into a common subspace for cross-modal retrieval/classification, and Z_I can be a group indicator matrix to represent different semantic groups [36]  ... 
arXiv:1411.7798v1 fatcat:pp77pnvwmvftnkrwql5gu4my34
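
The pairwise constraints mentioned in this entry can be illustrated with a toy contrastive objective over must-link and cannot-link pairs drawn across two modalities. This is a generic sketch, not the compound regularizer the paper itself proposes; the margin value and function names are assumptions.

    import numpy as np

    def pairwise_constraint_loss(A, B, must_link, cannot_link, margin=1.0):
        """Toy cross-modal pairwise loss: pull must-link pairs together and
        push cannot-link pairs at least `margin` apart in the common subspace.

        A, B        : (n, k) projected features of two modalities
        must_link   : (i, j) index pairs sharing the same semantics
        cannot_link : (i, j) index pairs with different semantics
        """
        pull = sum(np.sum((A[i] - B[j]) ** 2) for i, j in must_link)
        push = sum(max(0.0, margin - np.linalg.norm(A[i] - B[j])) ** 2
                   for i, j in cannot_link)
        return pull + push

    rng = np.random.default_rng(0)
    A = rng.normal(size=(10, 8))   # projected image features (placeholder)
    B = rng.normal(size=(10, 8))   # projected text features (placeholder)
    print(pairwise_constraint_loss(A, B, must_link=[(0, 0), (1, 1)],
                                   cannot_link=[(0, 3)]))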

Survey on Deep Multi-modal Data Analytics: Collaboration, Rivalry and Fusion [article]

Yang Wang
2020 arXiv   pre-print
Throughout this survey, we further indicate that the critical components for this field go to collaboration, adversarial competition and fusion over multi-modal spaces.  ...  With the development of web technology, multi-modal or multi-view data has surged as a major stream for big data, where each modal/view encodes individual property of data objects.  ...  [104] constructed a Dictionary Learning based Adversarial Cross-Modal Retrieval (DLA-CMR).  ... 
arXiv:2006.08159v1 fatcat:g4467zmutndglmy35n3eyfwxku

Cross-view Feature Learning via Structures Unlocking based on Robust Low-Rank Constraint

Ao Li, Yu Ding, Deyun Chen, Guanglu Sun, Hailong Jiang, Qidi Wu
2020 IEEE Access  
Finally, the joint semantic consensus constraint is designed to be integrated into the learning framework, which can explore the shared and view-specific information for enforcing the view-invariant character  ...  Firstly, to unlock the latent class structure and view structure, a self-expressed model with dual low-rank constraints is presented, which can separate the two manifold structures in the learned subspace  ...  When the semantic labels are available, a set of supervised feature learning methods for cross-modal data has been proposed, such as Supervised coupled-dictionary learning with group structures for multi-modal  ... 
doi:10.1109/access.2020.2978548 fatcat:5kzrlfej6zf4bhf6jhimwvaiim

Self-supervised asymmetric deep hashing with margin-scalable constraint [article]

Zhengyang Yu, Song Wu, Zhihao Dou, Erwin M. Bakker
2021 arXiv   pre-print
SADH implements a self-supervised network to sufficiently preserve semantic information in a semantic feature dictionary and a semantic code dictionary for the semantics of the given dataset, which efficiently  ...  and precisely guides a feature learning network to preserve multilabel semantic information using an asymmetric learning strategy.  ...  Acknowledgements This work was supported by the National Natural Science Foundation of China (61806168), Fundamental Research Funds for the Central Universities (SWU117059), and Venture & Innovation Support  ... 
arXiv:2012.03820v3 fatcat:fscm4ggdyrct3o6kso53mmriou

Multi-Paced Dictionary Learning for cross-domain retrieval and recognition

Dan Xu, Jingkuan Song, Xavier Alameda-Pineda, Elisa Ricci, Nicu Sebe
2016 2016 23rd International Conference on Pattern Recognition (ICPR)  
The chosen baselines can be grouped by the way they achieve cross-modal retrieval: CFA and CCA learn modality-specific subspaces; LCMH, CVH, CMLSSH and PLS find a projection matrix by means of spectral  ...  MAP for the cross-modal retrieval tasks as a function of the dictionary size for MPDL and MPDL (µ → ∞) on the Wikipedia dataset.  ... 
doi:10.1109/icpr.2016.7900132 dblp:conf/icpr/XuSARS16 fatcat:rciwsjryffbape6j7pqsj6tdq4
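
Several of the baselines named in this snippet (CFA, CCA, PLS) are classical two-view projection methods. A minimal CCA example of mapping image and text features into a shared space with scikit-learn is shown below; the random arrays stand in for real features, so the numbers are only illustrative.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    img = rng.normal(size=(200, 128))   # image features (placeholder data)
    txt = rng.normal(size=(200, 64))    # text features (placeholder data)

    cca = CCA(n_components=10)
    cca.fit(img, txt)
    img_c, txt_c = cca.transform(img, txt)   # both views land in a 10-d common space

    # Cross-modal retrieval: rank all texts by cosine similarity to an image query.
    q = img_c[0]
    scores = txt_c @ q / (np.linalg.norm(txt_c, axis=1) * np.linalg.norm(q) + 1e-12)
    print(scores.argsort()[::-1][:5])        # indices of the top-5 retrieved texts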

Harmonized Multimodal Learning with Gaussian Process Latent Variable Models [article]

Guoli Song, Shuhui Wang, Qingming Huang, Qi Tian
2019 arXiv   pre-print
Experimental results on four benchmark datasets show that the proposed models outperform the strong baselines for cross-modal retrieval tasks, and that the harmonized multimodal learning method is superior  ...  Multimodal learning aims to discover the relationship between multiple modalities. It has become an important research topic due to extensive multimodal applications such as cross-modal retrieval.  ...  ACKNOWLEDGMENTS The authors would like to thank the associate editor and the reviewers for their time and effort provided to review the manuscript.  ... 
arXiv:1908.04979v1 fatcat:hpjdzvau3fhzrnk5lz6ahjqeuu

Deep Learning Techniques for Future Intelligent Cross-Media Retrieval [article]

Sadaqat ur Rehman, Muhammad Waqas, Shanshan Tu, Anis Koubaa, Obaid ur Rehman, Jawad Ahmad, Muhammad Hanif, Zhu Han
2020 arXiv   pre-print
and the potential solutions of deep learning assisted cross-media retrieval.  ...  In this paper, we provide a novel taxonomy according to the challenges faced by multi-modal deep learning approaches in solving cross-media retrieval, namely: representation, alignment, and translation  ...  Multi-modal Stacked Auto-Encoders (MSAE) model [113] is used to project features from cross-modality into a common latent space for efficient cross-modal retrieval.  ... 
arXiv:2008.01191v1 fatcat:t63bg55w2vdqjcprzaaidrmprq
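
The MSAE idea referenced here, modality-specific encoders that project into one shared latent space, can be illustrated with an untrained forward pass; the layer sizes, random weights, and tanh activations below are placeholders rather than the architecture from [113].

    import numpy as np

    rng = np.random.default_rng(0)

    def encoder(dims):
        """Random weights for a small fully connected encoder (untrained demo)."""
        return [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(dims, dims[1:])]

    def forward(x, layers):
        for W in layers:
            x = np.tanh(x @ W)           # nonlinearity between stacked layers
        return x

    img_enc = encoder([128, 64, 32])     # image branch: 128 -> 64 -> 32
    txt_enc = encoder([300, 64, 32])     # text branch : 300 -> 64 -> 32

    img_latent = forward(rng.normal(size=(5, 128)), img_enc)
    txt_latent = forward(rng.normal(size=(5, 300)), txt_enc)
    print(img_latent.shape, txt_latent.shape)   # both (5, 32): one common latent space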

Cross-media semantic representation via bi-directional learning to rank

Fei Wu, Xinyan Lu, Zhongfei Zhang, Shuicheng Yan, Yong Rui, Yueting Zhuang
2013 Proceedings of the 21st ACM international conference on Multimedia - MM '13  
The latent space embedding is discriminatively learned by the structural large margin learning for optimization with certain ranking criteria (mean average precision in this paper) directly.  ...  We propose a general cross-media ranking algorithm to optimize the bi-directional listwise ranking loss with a latent space embedding, which we call Bi-directional Cross-Media Semantic Representation Model  ...  Motivated by the fact that dictionary learning (DL) methods have the intrinsic power of capturing the heterogeneous features by generating different dictionaries for multi-modal data, multi-modal dictionary  ... 
doi:10.1145/2502081.2502097 dblp:conf/mm/WuLZYRZ13 fatcat:ni7x2naeavcgdalyqnrg4ug464
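
The ranking criterion optimized here is mean average precision (MAP), the usual figure of merit in these cross-modal retrieval papers. The following is a small reference implementation of MAP over binary relevance lists (my own sketch, not code from the paper):

    def mean_average_precision(ranked_relevance):
        """MAP over a set of queries.

        ranked_relevance : list of lists; each inner list holds 0/1 relevance
                           flags of the retrieved items, in ranked order.
        """
        ap_values = []
        for rel in ranked_relevance:
            hits, precisions = 0, []
            for rank, r in enumerate(rel, start=1):
                if r:
                    hits += 1
                    precisions.append(hits / rank)   # precision at this hit
            ap_values.append(sum(precisions) / max(hits, 1))
        return sum(ap_values) / len(ap_values)

    # Two toy queries: a perfect ranking vs. one miss at the top.
    print(mean_average_precision([[1, 1, 0, 0], [0, 1, 1, 0]]))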

A low rank structural large margin method for cross-modal ranking

Xinyan Lu, Fei Wu, Siliang Tang, Zhongfei Zhang, Xiaofei He, Yueting Zhuang
2013 Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval - SIGIR '13  
, which we call Latent Semantic Cross-Modal Ranking (LSCMR).  ...  The latent low-rank embedding space is discriminatively learned by structural large margin learning to optimize for certain ranking criteria directly.  ...  Algorithm 1 Latent Semantic Cross-Modal Ranking (LSCMR).  ... 
doi:10.1145/2484028.2484039 dblp:conf/sigir/LuWTZHZ13 fatcat:2cn4fjzz4vhizck2tok5u6tj3u

A Cross-Media Advertising Design and Communication Model Based on Feature Subspace Learning

Shanshan Li, Gengxin Sun
2022 Computational Intelligence and Neuroscience  
This paper uses feature subspace learning and cross-media retrieval analysis to construct an advertising design and communication model.  ...  Based on the common subspace learning, this paper uses the extreme learning machine method to improve the cross-modal retrieval accuracy, mining deeper data features and maximizing the correlation between  ...  In this chapter, a task-oriented cross-modal retrieval method with joint linear discriminant and graph regularity is proposed with different mapping mechanisms for different retrieval tasks. The correlation  ... 
doi:10.1155/2022/5874722 pmid:35619757 pmcid:PMC9129948 fatcat:qsh45riw7vfhfjrvorwwamf7ne
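
The extreme learning machine (ELM) mentioned in this snippet has a very compact core: a random, untrained hidden layer followed by a closed-form least-squares output layer. A generic single-view sketch (not the paper's specific cross-modal variant; sizes and names are assumptions) looks like this:

    import numpy as np

    def elm_fit(X, T, hidden=256, seed=0):
        """Basic ELM: random hidden weights, least-squares output weights."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], hidden))
        b = rng.normal(size=hidden)
        H = np.tanh(X @ W + b)             # random nonlinear feature expansion
        beta = np.linalg.pinv(H) @ T       # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    X = np.random.default_rng(1).normal(size=(100, 20))
    T = (X[:, :1] > 0).astype(float)       # toy binary target
    W, b, beta = elm_fit(X, T)
    print(elm_predict(X[:5], W, b, beta).round(2))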

Deep Multi-Semantic Fusion-Based Cross-Modal Hashing

Xinghui Zhu, Liewu Cai, Zhuoyang Zou, Lei Zhu
2022 Mathematics  
However, the existing deep hashing methods cannot consider multi-label semantic learning and cross-modal similarity learning simultaneously.  ...  Due to the low costs of its storage and search, the cross-modal retrieval hashing method has received much research interest in the big data era.  ...  Then, the learned latent semantic features are mapped to a joint common subspace.  ... 
doi:10.3390/math10030430 fatcat:yri6dbd53zglhc77wtpswxgsoa
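
Cross-modal hashing methods such as this one owe their low storage and search cost to binary codes compared with the Hamming distance. A minimal retrieval sketch over random codes (sign quantization and brute-force Hamming ranking, not the paper's learned hash functions) is given below.

    import numpy as np

    def to_binary_codes(Z):
        """Quantize real-valued embeddings to +/-1 hash codes via sign()."""
        return np.sign(Z).astype(np.int32)

    def hamming_rank(query_code, db_codes):
        """Rank database items by Hamming distance to the query code."""
        bits = db_codes.shape[1]
        dist = (bits - db_codes @ query_code) // 2   # valid for +/-1 codes
        return np.argsort(dist)

    rng = np.random.default_rng(1)
    img_codes = to_binary_codes(rng.normal(size=(1000, 64)))  # image codes (random demo)
    txt_codes = to_binary_codes(rng.normal(size=(1000, 64)))  # text codes (random demo)
    print(hamming_rank(txt_codes[0], img_codes)[:5])          # text query -> top-5 images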

Shared Predictive Cross-Modal Deep Quantization [article]

Erkun Yang, Cheng Deng, Chao Li, Wei Liu, Jie Li, Dacheng Tao
2019 arXiv   pre-print
This paper presents a deep compact code learning solution for efficient cross-modal similarity search.  ...  Our approach, dubbed shared predictive deep quantization (SPDQ), explicitly formulates a shared subspace across different modalities and two private subspaces for individual modalities, and representations  ...  among multiple modalities and learning compact codes of higher quality in a joint deep network architecture.  ... 
arXiv:1904.07488v1 fatcat:gux4ioijofcbxfwwgltcfklsma

Kernelizing Semantic Similarity Measurement Using Bi-directional Learning Ranking for Cross-Modal Retrieval

Shuang Liu, Liang Bai, Xiang-An Heng, Yan-Li Hu
2018 DEStech Transactions on Computer Science and Engineering  
which is more suitable for cross-modal retrieval tasks.  ...  Aiming at measuring the inner semantic similarities between different modal data, cross-modal retrieval tries to map heterogeneous features to a hidden common subspace in which they can be reasonably compared  ...  Figure 1. Kernelizing Semantic Similarity Measurement Using Bi-directional Learning Ranking for Cross-Modal Retrieval.  ... 
doi:10.12783/dtcse/ceic2018/24530 fatcat:daslx4hzyfeq3oxsdvcdp5qsk4

Cross-Modality Submodular Dictionary Learning for Information Retrieval

Fan Zhu, Ling Shao, Mengyang Yu
2014 Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management - CIKM '14  
A greedy dictionary construction approach is introduced for learning an isomorphic feature space, to which cross-modality data can be adapted while data smoothness is guaranteed.  ...  The proposed objective function consists of two reconstruction error terms for both modalities and a Maximum Mean Discrepancy (MMD) term that measures the cross-modality discrepancy.  ...  In this work, we propose a cross-modality submodular dictionary learning (CmSDL) method for the cross-modality image-text retrieval problem.  ... 
doi:10.1145/2661829.2661926 dblp:conf/cikm/ZhuSY14 fatcat:v2s5wejzindlbps36migg4vaki
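
The MMD term described in this snippet has a standard empirical estimator. The sketch below computes a biased estimate of the squared Maximum Mean Discrepancy between two modalities' feature samples under an RBF kernel; the kernel bandwidth and the random stand-in features are assumptions, and this is the generic estimator rather than CmSDL's exact objective.

    import numpy as np

    def rbf_mmd2(X, Y, gamma=1.0):
        """Biased empirical estimate of squared MMD between samples X and Y
        under the RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
        def k(A, B):
            sq = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
                  - 2.0 * A @ B.T)
            return np.exp(-gamma * sq)
        return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

    rng = np.random.default_rng(0)
    img = rng.normal(size=(100, 32))            # image-modality features (placeholder)
    txt = rng.normal(loc=0.5, size=(100, 32))   # text-modality features (shifted)
    print(rbf_mmd2(img, txt))                   # grows as the two distributions differ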
Showing results 1 — 15 out of 187 results