Supervised Coupled Dictionary Learning with Group Structures for Multi-modal Retrieval
2013
PROCEEDINGS OF THE TWENTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE
A good similarity mapping function across heterogeneous, high-dimensional feature spaces is highly desirable for many applications involving multi-modal data. In this paper, we introduce coupled dictionary learning (DL) into supervised sparse coding for multi-modal (cross-media) retrieval. We call this Supervised coupled dictionary learning with group structures for Multi-Modal retrieval (SliM2). SliM2 formulates the multi-modal mapping as a constrained dictionary learning problem. By utilizing the …
doi:10.1609/aaai.v27i1.8603
fatcat:6cm5hkgr4nboreuzgpngb4cv34
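The abstract above describes coupled dictionary learning across modalities. As a rough illustration of that general idea only (not the paper's SliM2 formulation, which additionally exploits label-induced group structures), the following minimal NumPy sketch learns one dictionary per modality under a single shared sparse code that must reconstruct both views of each sample. All names here (learn_coupled_dictionaries, n_atoms, lam, etc.) are hypothetical, and the alternating ISTA / least-squares scheme is a common simplification, not the authors' algorithm.

```python
# Minimal coupled dictionary learning sketch: two modalities, one shared sparse code.
import numpy as np


def soft_threshold(z, t):
    """Element-wise soft-thresholding operator used by ISTA."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def learn_coupled_dictionaries(X, Y, n_atoms=32, lam=0.1, n_iter=30, seed=0):
    """Alternate between shared sparse coding and per-modality dictionary updates.

    X: (dx, n) features of modality 1, Y: (dy, n) features of modality 2.
    Returns dictionaries Dx, Dy and the shared sparse codes A of shape (n_atoms, n).
    """
    rng = np.random.default_rng(seed)
    dx, n = X.shape
    dy, _ = Y.shape
    Dx = rng.standard_normal((dx, n_atoms))
    Dy = rng.standard_normal((dy, n_atoms))
    Dx /= np.linalg.norm(Dx, axis=0, keepdims=True)
    Dy /= np.linalg.norm(Dy, axis=0, keepdims=True)
    A = np.zeros((n_atoms, n))

    for _ in range(n_iter):
        # Sparse coding step: ISTA on the stacked (coupled) problem,
        # so one code must explain both views of each sample.
        D = np.vstack([Dx, Dy])
        Z = np.vstack([X, Y])
        L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
        for _ in range(20):
            grad = D.T @ (D @ A - Z)
            A = soft_threshold(A - grad / L, lam / L)

        # Dictionary update step: least squares per modality, then renormalize columns.
        G = A @ A.T + 1e-6 * np.eye(n_atoms)
        Dx = np.linalg.solve(G, A @ X.T).T
        Dy = np.linalg.solve(G, A @ Y.T).T
        Dx /= np.maximum(np.linalg.norm(Dx, axis=0, keepdims=True), 1e-12)
        Dy /= np.maximum(np.linalg.norm(Dy, axis=0, keepdims=True), 1e-12)

    return Dx, Dy, A


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((64, 200))  # toy "image" features
    Y = rng.standard_normal((48, 200))  # toy "text" features for the same items
    Dx, Dy, A = learn_coupled_dictionaries(X, Y)
    # Cross-modal retrieval sketch: code a query with its modality's dictionary,
    # then rank items of the other modality by similarity of their shared codes.
    print(Dx.shape, Dy.shape, A.shape)
```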