The file type is application/pdf.
A novel fusion method for integrating multiple modalities and knowledge for multimodal location estimation
2013
Proceedings of the 2nd ACM international workshop on Geotagging and its applications in multimedia - GeoMM '13
This article describes a novel fusion approach using multiple modalities and knowledge sources that improves the accuracy of multimodal location estimation algorithms. The problem of "multimodal location estimation" or "placing" involves associating geo-locations with consumer-produced multimedia data like videos or photos that have not been tagged using GPS. Our algorithm effectively integrates data from the visual and textual modalities with external geographical knowledge bases by building a
doi:10.1145/2509230.2509238
dblp:conf/mm/KelmSCFERS13
fatcat:rbjyo4xkonauxjtirvuxor5xgm