Authors
Gelbukh Alexander
Title Topic-Based Image Caption Generation
Type Journal
Sub-type JCR
Description Arabian Journal for Science and Engineering
Abstract Image captioning is the task of generating captions for a given image based on its content. Describing an image well requires extracting as much information from it as possible. Apart from detecting the objects present and their relative orientation, the topic of the image is another vital piece of information that can be incorporated into the model to improve the caption generation system. The aim is to put extra emphasis on the context of the image, imitating the human approach, since objects unrelated to the context representing the image should not appear in the generated caption. In this work, the focus is on detecting the topic of the image so as to guide a novel deep learning-based encoder–decoder framework that generates captions for the image. The method is compared with earlier state-of-the-art models on the MSCOCO 2017 training data set. BLEU, CIDEr, ROUGE-L, and METEOR scores are used to measure the efficacy of the model and show an improvement in the performance of the caption generation process.
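The abstract describes an encoder–decoder captioner guided by a detected image topic. Below is a minimal illustrative sketch in PyTorch of how a per-image topic distribution could condition such a decoder; the class name `TopicGuidedCaptioner`, the layer sizes, and the fusion scheme are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the paper's implementation) of a topic-conditioned
# encoder-decoder captioner: pooled image features and a topic distribution
# are fused to initialize an LSTM decoder. All dimensions are illustrative.
import torch
import torch.nn as nn

class TopicGuidedCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, num_topics=80, vocab_size=10000,
                 embed_dim=256, hidden_dim=512):
        super().__init__()
        # Project CNN image features and the topic distribution into the
        # decoder's hidden space; their sum initializes the LSTM state.
        self.img_proj = nn.Linear(feat_dim, hidden_dim)
        self.topic_proj = nn.Linear(num_topics, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feats, topic_dist, captions):
        # img_feats: (B, feat_dim) pooled CNN features
        # topic_dist: (B, num_topics) per-image topic probabilities
        # captions: (B, T) token ids of the reference caption (teacher forcing)
        h0 = torch.tanh(self.img_proj(img_feats) + self.topic_proj(topic_dist))
        h0 = h0.unsqueeze(0)                  # (1, B, hidden_dim)
        c0 = torch.zeros_like(h0)
        emb = self.embed(captions)            # (B, T, embed_dim)
        hidden, _ = self.lstm(emb, (h0, c0))  # (B, T, hidden_dim)
        return self.out(hidden)               # (B, T, vocab_size) logits

# Toy usage with random tensors standing in for real MSCOCO features.
model = TopicGuidedCaptioner()
logits = model(torch.randn(2, 2048),
               torch.softmax(torch.randn(2, 80), -1),
               torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```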
Notes DOI 10.1007/s13369-019-04262-2
Place Heidelberg
Country Germany
Pages 3025-3034
Vol. / Issue v. 45, no. 4
Start 2020-04-01
End
ISBN/ISSN