CapOnImage: Context-driven Dense-Captioning On Image

Yiqi Gao, Xinglin Hou, Yuanmeng Zhang, Tiezheng Ge, Yuning Jiang, Peng Wang

Research output: Conference contribution › Paper › peer-review

Abstract

Existing image captioning systems are dedicated to generating narrative captions for images, which are spatially detached from the image in presentation. However, texts can also be used as decorations on the image to highlight the key points and increase the attractiveness of images. In this work, we introduce a new task called captioning on image (CapOnImage), which aims to generate dense captions at different locations of the image based on contextual information. For this new task, we introduce a large-scale benchmark called CapOnImage2M, which contains 2.1 million product images, each with an average of 4.8 spatially localized captions. To fully exploit the surrounding visual context to generate the most suitable caption for each location, we propose a multi-modal pre-training model with multi-level pre-training tasks that progressively learn the correspondence between texts and image locations from easy to hard. To avoid generating redundant captions for nearby locations, we further enhance the location embedding with neighbor locations. Compared with other image captioning model variants, our model achieves the best results in both captioning accuracy and diversity aspects.
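The abstract mentions enhancing the location embedding with neighboring locations to avoid redundant captions at nearby positions. The snippet below is a minimal, hypothetical sketch of that idea, not the authors' released code: the module name `NeighborAwareLocationEmbedding`, the mean-pooling of neighbors, and parameters such as `num_locations` and the neighbor count are assumptions made purely for illustration.

```python
# Hypothetical sketch (assumption, not the paper's implementation): embed each
# candidate text location and mix in the averaged embeddings of its spatial
# neighbors, so nearby locations share context but remain distinguishable.
import torch
import torch.nn as nn


class NeighborAwareLocationEmbedding(nn.Module):
    """Location embedding enhanced with neighboring locations (illustrative only)."""

    def __init__(self, num_locations: int, dim: int):
        super().__init__()
        self.loc_embed = nn.Embedding(num_locations, dim)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, loc_ids: torch.Tensor, neighbor_ids: torch.Tensor) -> torch.Tensor:
        # loc_ids: (batch,) indices of the target locations
        # neighbor_ids: (batch, k) indices of the k nearest candidate locations
        own = self.loc_embed(loc_ids)                     # (batch, dim)
        neighbors = self.loc_embed(neighbor_ids).mean(1)  # (batch, dim), mean-pooled
        return self.mix(torch.cat([own, neighbors], dim=-1))


# Usage example: 16 candidate locations, 64-dim embeddings, 3 neighbors each.
embed = NeighborAwareLocationEmbedding(num_locations=16, dim=64)
loc_ids = torch.tensor([0, 5])
neighbor_ids = torch.tensor([[1, 2, 3], [4, 6, 9]])
out = embed(loc_ids, neighbor_ids)
print(out.shape)  # torch.Size([2, 64])
```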

Original language: English
Pages: 3449-3465
Number of pages: 17
Publication status: Published - 2022
Event: 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 - Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 → 11 Dec 2022

Conference

Conference: 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
Country/Territory: United Arab Emirates
City: Abu Dhabi
Period: 7/12/22 → 11/12/22
