Multi-view pedestrian captioning with an attention topic CNN model
Computers in Industry (Print)
Image captioning is a fundamental task connecting computer vision and natural language processing. Recent studies usually concentrate on generic image or video captioning across thousands of object classes. However, they fail to cover detailed semantics and cannot effectively deal with a specific class of objects, such as pedestrians. Pedestrian captioning plays a critical role in the analysis, identification, and retrieval of pedestrians in massive collections of video data. Therefore, in this paper, we propose a novel approach that generates multi-view captions for pedestrian images using a topic attention mechanism over global and local semantic regions. First, we detect different local parts of a pedestrian and utilize a deep convolutional neural network (CNN) to extract a series of features from these local regions and from the whole image. Then, we aggregate these features with a topic attention CNN model to produce a representative vector that richly expresses the image from a different view at each time step. This feature vector is fed into a hierarchical recurrent neural network to generate multi-view captions for pedestrian images. Finally, a new dataset named CASIA_Pedestrian, comprising 5,000 pedestrian image-sentence pairs, is collected to evaluate pedestrian captioning performance. Experiments and comparison results demonstrate the superiority of the proposed approach.
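The feature-aggregation step described in the abstract can be illustrated with a minimal sketch: attention weights are computed over the region features (local parts plus the whole image) and used to form a single representative vector. This is a simplified, hypothetical illustration with made-up dimensions and a plain dot-product scoring function, not the authors' actual topic attention CNN model.

```python
import numpy as np

def topic_attention(region_feats, topic_vec):
    """Aggregate region features into one representative vector.

    region_feats: (R, D) array of CNN features for local parts + whole image.
    topic_vec:    (D,) array guiding attention at the current time step
                  (hypothetical stand-in for the model's topic signal).
    Returns a (D,) convex combination of the region features.
    """
    scores = region_feats @ topic_vec                 # (R,) relevance scores
    scores -= scores.max()                            # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over regions
    return weights @ region_feats                     # (D,) attended feature

# Hypothetical example: 4 regions (e.g., head, torso, legs, full image),
# each represented by an 8-dimensional feature vector.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
topic = rng.standard_normal(8)
vec = topic_attention(feats, topic)
```

At each time step a different topic vector would yield different attention weights, so the recurrent decoder receives a view-specific summary of the image rather than a single fixed feature.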