A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020.
The file type is application/pdf.
Learning Cross-Modal Deep Embeddings for Multi-Object Image Retrieval using Text and Sketch
[article] · 2018 · arXiv pre-print
In this work we introduce a cross-modal image retrieval system that allows both text and sketch as input modalities for the query. A cross-modal deep network architecture is formulated to jointly model the sketch and text input modalities as well as the image output modality, learning a common embedding between text and images and between sketches and images. In addition, an attention model is used to selectively focus on the different objects of the image, allowing for …
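The abstract describes the architecture only at a high level. What follows is a minimal illustrative sketch in PyTorch, assuming the general recipe it outlines: separate encoders for images (with attention pooling over image regions) and for text or sketch queries, all projected into a common embedding space and trained with a margin-based ranking loss. The layer sizes, the attention form, the loss, and names such as EMBED_DIM, ImageEncoder, and QueryEncoder are assumptions for illustration, not the paper's exact design.

    # Illustrative sketch only; layer sizes, attention, and loss are assumed,
    # not taken from the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    EMBED_DIM = 256  # dimensionality of the common embedding space (assumed)

    class ImageEncoder(nn.Module):
        """Encodes precomputed region features into the common space
        with soft attention over the image regions."""
        def __init__(self, feat_dim=512, embed_dim=EMBED_DIM):
            super().__init__()
            self.attn = nn.Linear(feat_dim, 1)        # scores each region
            self.proj = nn.Linear(feat_dim, embed_dim)

        def forward(self, feats):                     # feats: (B, R, feat_dim)
            weights = F.softmax(self.attn(feats), dim=1)  # attention weights
            pooled = (weights * feats).sum(dim=1)         # weighted pooling
            return F.normalize(self.proj(pooled), dim=-1)

    class QueryEncoder(nn.Module):
        """Shared template for the text and sketch branches (assumed):
        maps a fixed-size query feature into the common space."""
        def __init__(self, in_dim, embed_dim=EMBED_DIM):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.ReLU(),
                                     nn.Linear(embed_dim, embed_dim))

        def forward(self, x):                         # x: (B, in_dim)
            return F.normalize(self.net(x), dim=-1)

    def triplet_ranking_loss(query, pos_img, neg_img, margin=0.2):
        """Pulls matching query/image pairs together and pushes
        non-matching pairs apart by at least the margin."""
        pos = (query * pos_img).sum(dim=-1)  # cosine sim (unit vectors)
        neg = (query * neg_img).sum(dim=-1)
        return F.relu(margin - pos + neg).mean()

    # Toy usage with random features standing in for real CNN/word/sketch
    # features (hypothetical shapes).
    imgs = torch.randn(4, 49, 512)   # 4 images, 49 regions each
    txt = torch.randn(4, 300)        # e.g. pooled word vectors
    img_enc, txt_enc = ImageEncoder(), QueryEncoder(300)
    loss = triplet_ranking_loss(txt_enc(txt), img_enc(imgs),
                                img_enc(imgs.roll(1, 0)))  # shifted negatives

Normalizing all embeddings makes the dot product a cosine similarity, which is the usual choice for margin-based cross-modal ranking losses; a sketch branch would be a second QueryEncoder over sketch features in the same space.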
arXiv:1804.10819v1
fatcat:dc4o5ocajfddfm6652puqy2ize