A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2018; you can also visit the original URL.
Learning to Generate Descriptions of Visual Data Anchored in Spatial Relations
2017
IEEE Computational Intelligence Magazine
The explosive growth of visual data, both online and offline, in private and public repositories has created an urgent need for better ways to index, search, retrieve, process, and manage visual content. Automatic methods for generating image descriptions can help with all of these tasks, as well as play an important role in assistive technology for the visually impaired. The task we address in this paper is the automatic generation of image descriptions that are anchored in spatial relations.
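To make the task concrete, the sketch below shows what a description "anchored in spatial relations" might look like: given two labeled objects with bounding boxes, it picks the dominant axis-aligned relation between them and renders a sentence. This is an illustrative rule-based toy only, with hypothetical helper names; it is not the learning-based method proposed in the paper.

```python
# Illustrative sketch only (hypothetical helpers, not the paper's model):
# generate a spatial-relation description from two object bounding boxes.

def center(box):
    """Center (x, y) of a box given as (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def spatial_relation(box_a, box_b):
    """Dominant axis-aligned relation of object A relative to object B."""
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    dx, dy = ax - bx, ay - by
    if abs(dx) >= abs(dy):
        return "to the left of" if dx < 0 else "to the right of"
    # Image coordinates: y grows downward, so negative dy means "above".
    return "above" if dy < 0 else "below"

def describe(label_a, box_a, label_b, box_b):
    """Render a minimal description anchored in one spatial relation."""
    return f"the {label_a} is {spatial_relation(box_a, box_b)} the {label_b}"

print(describe("cup", (10, 40, 30, 60), "laptop", (50, 30, 120, 90)))
# → the cup is to the left of the laptop
```

A learned model would replace the hand-written rules with relations and wording predicted from data, but the input/output contract is the same: objects plus geometry in, a relation-anchored sentence out.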
doi:10.1109/mci.2017.2708559