CAM-RNN: Co-Attention Model Based RNN for Video Captioning

Bin Zhao, Xuelong Li, Xiaoqiang Lu

Research output: Contribution to journal › Article › peer-review


Abstract

Video captioning is a technique that bridges vision and language, for which both visual information and text information are important. Typical approaches are based on the recurrent neural network (RNN), where the caption is generated word by word and the current word is predicted from the visual content and the previously generated words. However, when predicting the current word, much of the visual content is uncorrelated with it, and some of the previously generated words provide little information, which may interfere with generating a correct caption. Motivated by this observation, we attempt to exploit the visual and text features that are most correlated with the caption. In this paper, a co-attention model based recurrent neural network (CAM-RNN) is proposed, where the CAM encodes the visual and text features and the RNN works as the decoder to generate the video caption. Specifically, the CAM is composed of a visual attention module, a text attention module, and a balancing gate. During generation, the visual attention module adaptively attends to the salient regions in each frame and to the frames most correlated with the caption, while the text attention module automatically focuses on the most relevant previously generated words or phrases. Between the two attention modules, a balancing gate is designed to regulate the influence of visual features and text features when generating the caption. Extensive experiments on four popular datasets, MSVD, Charades, MSR-VTT, and MPII-MD, demonstrate the effectiveness of the proposed approach.
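Since the abstract describes the architecture only in prose, the following is a minimal PyTorch sketch of one decoding step combining visual attention, text attention, and a balancing gate as the abstract describes. All module names, dimensions, and the additive (Bahdanau-style) attention form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttentionDecoderStep(nn.Module):
    """One decoding step: visual attention + text attention + balancing gate + RNN.

    A sketch of the mechanism described in the abstract; the scoring function,
    gating form, and RNN cell are assumptions, not the paper's exact design.
    """

    def __init__(self, feat_dim, embed_dim, hidden_dim, vocab_size):
        super().__init__()
        # Visual attention over frame/region features, conditioned on the hidden state.
        self.v_query = nn.Linear(hidden_dim, feat_dim)
        self.v_score = nn.Linear(feat_dim, 1)
        # Text attention over embeddings of the previously generated words.
        self.t_query = nn.Linear(hidden_dim, embed_dim)
        self.t_score = nn.Linear(embed_dim, 1)
        # Balancing gate: regulates the mix of visual vs. text context.
        self.gate = nn.Linear(hidden_dim + feat_dim + embed_dim, 1)
        self.proj_v = nn.Linear(feat_dim, hidden_dim)
        self.proj_t = nn.Linear(embed_dim, hidden_dim)
        self.rnn = nn.GRUCell(embed_dim + hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_emb, h, vis_feats, prev_word_embs):
        # word_emb:       (B, embed_dim)  embedding of the last generated word
        # h:              (B, hidden_dim) decoder hidden state
        # vis_feats:      (B, N, feat_dim)  frame/region features
        # prev_word_embs: (B, T, embed_dim) embeddings of words generated so far
        v_e = self.v_score(torch.tanh(vis_feats + self.v_query(h).unsqueeze(1)))
        v_ctx = (F.softmax(v_e, dim=1) * vis_feats).sum(dim=1)          # (B, feat_dim)
        t_e = self.t_score(torch.tanh(prev_word_embs + self.t_query(h).unsqueeze(1)))
        t_ctx = (F.softmax(t_e, dim=1) * prev_word_embs).sum(dim=1)     # (B, embed_dim)
        # Scalar gate per sample decides how much visual vs. text context to use.
        g = torch.sigmoid(self.gate(torch.cat([h, v_ctx, t_ctx], dim=1)))
        ctx = g * self.proj_v(v_ctx) + (1 - g) * self.proj_t(t_ctx)     # (B, hidden_dim)
        h_next = self.rnn(torch.cat([word_emb, ctx], dim=1), h)
        return self.out(h_next), h_next                                  # logits, new state
```

Because both attended contexts are recomputed from the updated hidden state at every step, the decoder can shift its focus across frames and across the partial caption as generation proceeds, which is the behavior the abstract attributes to the two attention modules.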

Original language: English
Pages (from-to): 5552-5565
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 11
State: Published - 1 Nov 2019
