Images Don't Lie: Transferring Deep Visual Semantic Features to Large-Scale Multimodal Learning to Rank

Corey Lynch, Kamelia Aryafar, Josh Attenberg
2015, arXiv preprint
Search is at the heart of modern e-commerce. As a result, the task of ranking search results automatically (learning to rank) is a multibillion-dollar machine learning problem. Traditional models optimize over a few hand-constructed features based on the item's text. In this paper, we introduce a multimodal learning to rank model that combines these traditional features with visual semantic features transferred from a deep convolutional neural network. In a large-scale experiment using data from the online marketplace Etsy, we verify that moving to a multimodal representation significantly improves ranking quality. We show how image features can capture fine-grained style information not available in a text-only representation. In addition, we show concrete examples of how image information can successfully disentangle pairs of highly different items that are ranked similarly by a text-only model.
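The core idea described above — concatenating traditional text features with CNN-derived image embeddings and learning a ranking function over the joint representation — can be sketched as a pairwise learning-to-rank reduction. The sketch below is illustrative only: the feature dimensions, synthetic data, and hinge-style update are assumptions for demonstration, not the paper's actual pipeline or Etsy's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical listing representations: hand-constructed text features
# concatenated with image embeddings transferred from a deep CNN
# (dimensions here are made up for the sketch).
n_items, d_text, d_img = 100, 8, 16
text_feats = rng.normal(size=(n_items, d_text))
img_feats = rng.normal(size=(n_items, d_img))
X = np.hstack([text_feats, img_feats])  # multimodal representation

# Synthetic relevance scores used only to generate preference pairs.
true_w = rng.normal(size=d_text + d_img)
relevance = X @ true_w

# Pairwise learning to rank: learn w so that w.(x_i - x_j) > 0
# whenever item i is preferred over item j (a RankSVM-style
# reduction of ranking to classification on feature differences).
pairs = [(i, j) for i in range(n_items) for j in range(n_items)
         if relevance[i] > relevance[j]]
w = np.zeros(d_text + d_img)
lr = 0.01
for _ in range(20):
    for i, j in pairs:
        diff = X[i] - X[j]
        if w @ diff < 1.0:  # hinge-loss subgradient step
            w += lr * diff

# Fraction of preference pairs the learned model orders correctly.
scores = X @ w
accuracy = sum(scores[i] > scores[j] for i, j in pairs) / len(pairs)
```

In a real system the image embeddings would come from a pretrained convolutional network's penultimate layer rather than random vectors, and the pairwise labels from click or purchase logs.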
arXiv:1511.06746v1