A novel fusion method for integrating multiple modalities and knowledge for multimodal location estimation

Pascal Kelm, Sebastian Schmiedeke, Jaeyoung Choi, Gerald Friedland, Venkatesan Nallampatti Ekambaram, Kannan Ramchandran, Thomas Sikora
2013 Proceedings of the 2nd ACM international workshop on Geotagging and its applications in multimedia - GeoMM '13  
This article describes a novel fusion approach that uses multiple modalities and knowledge sources to improve the accuracy of multimodal location estimation algorithms. The problem of "multimodal location estimation" or "placing" involves associating geo-locations with consumer-produced multimedia data, such as videos or photos, that have not been tagged using GPS. Our algorithm effectively integrates data from the visual and textual modalities with external geographical knowledge bases by building a hierarchical model that combines data-driven and semantic methods to group visual and textual features within geographical regions. We evaluate our algorithm on the MediaEval 2010 Placing Task dataset and show that our system significantly outperforms other state-of-the-art approaches, successfully locating about 40% of the videos to within a radius of 100 m.
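The abstract's idea of combining per-region evidence from several modalities can be illustrated with a minimal late-fusion sketch. This is not the authors' actual model; the cells, scores, and fusion weights below are hypothetical placeholders, assuming each modality has already produced normalized per-region scores.

```python
# Hypothetical sketch of late fusion over coarse geographic cells (not the
# paper's actual hierarchical model): each modality contributes a normalized
# score per cell, scores are combined with fixed weights, and the estimate
# is the centroid of the highest-scoring cell.

from dataclasses import dataclass


@dataclass
class Cell:
    name: str
    lat: float
    lon: float


# Toy coarse regions with illustrative centroid coordinates.
CELLS = [
    Cell("berlin", 52.52, 13.40),
    Cell("paris", 48.86, 2.35),
    Cell("sf_bay", 37.77, -122.42),
]


def fuse(text_scores, visual_scores, w_text=0.7, w_visual=0.3):
    """Weighted late fusion of per-cell modality scores.

    `text_scores` / `visual_scores` map cell names to scores in [0, 1];
    the weights are hypothetical, not taken from the paper.
    """
    fused = {
        c.name: w_text * text_scores.get(c.name, 0.0)
        + w_visual * visual_scores.get(c.name, 0.0)
        for c in CELLS
    }
    total = sum(fused.values()) or 1.0  # renormalize to a distribution
    return {name: score / total for name, score in fused.items()}


def estimate(text_scores, visual_scores):
    """Return (lat, lon, confidence) for the best fused cell."""
    fused = fuse(text_scores, visual_scores)
    best = max(fused, key=fused.get)
    cell = next(c for c in CELLS if c.name == best)
    return cell.lat, cell.lon, fused[best]
```

For example, a video whose tags strongly suggest Paris while its visual features weakly agree would resolve to the Paris cell; a real system would then refine the estimate within that cell, which is where the hierarchical grouping described in the abstract comes in.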
doi:10.1145/2509230.2509238 dblp:conf/mm/KelmSCFERS13