Style-aware two-stage learning framework for video captioning

Yunchuan Ma, Zheng Zhu, Yuankai Qi, Amin Beheshti, Ying Li, Laiyun Qing, Guorong Li

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Significant progress has been made in video captioning in recent years. However, most existing methods learn directly from all given captions without distinguishing their styles. The large diversity among these captions may introduce ambiguity into model learning. To address this issue, we propose a style-aware two-stage learning framework. In the first stage, the model is trained with captions of separate styles, including length style (short, medium, or long), action style (single or multiple actions), and object style (single or multiple objects). For efficiency, a shared model with multiple individual style vectors is learned. In the second stage, a video style encoder is devised to capture style information from the input video and output a guidance signal on how to combine the style vectors for the final caption generation. Without bells and whistles, our method achieves state-of-the-art performance on three widely used public datasets: MSVD, MSR-VTT, and VATEX. The source code and trained models will be made available to the public.
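As a rough illustration of the framework described in the abstract, the sketch below shows one way the two stages could be wired together in PyTorch. All names (StyleAwareCaptioner, style_encoder, the particular style list, and the GRU decoder) are hypothetical simplifications rather than the authors' released implementation: stage 1 conditions a shared decoder on the style vector matching each training caption, while stage 2 lets a video style encoder predict mixing weights over the learned style vectors.

# Hypothetical sketch of the described two-stage idea; not the paper's code.
import torch
import torch.nn as nn

class StyleAwareCaptioner(nn.Module):
    """Shared decoder backbone with one learnable vector per caption style."""

    def __init__(self, video_dim=512, hidden_dim=512, vocab_size=10000,
                 style_names=("short", "medium", "long",
                              "single_action", "multi_action",
                              "single_object", "multi_object")):
        super().__init__()
        # One learnable style vector per style (length / action / object styles).
        self.style_vectors = nn.ParameterDict(
            {name: nn.Parameter(torch.randn(hidden_dim) * 0.02) for name in style_names}
        )
        self.video_proj = nn.Linear(video_dim, hidden_dim)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.word_head = nn.Linear(hidden_dim, vocab_size)
        # Stage 2: a video style encoder predicts how to mix the style vectors.
        self.style_encoder = nn.Sequential(
            nn.Linear(video_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, len(style_names)),
        )
        self.style_names = list(style_names)

    def forward(self, video_feats, max_len=20, style=None):
        # video_feats: (batch, frames, video_dim); mean-pooled for simplicity.
        pooled = video_feats.mean(dim=1)
        if style is not None:
            # Stage 1: train with the ground-truth style of each caption.
            style_vec = self.style_vectors[style].expand(pooled.size(0), -1)
        else:
            # Stage 2: mix style vectors with weights predicted from the video.
            weights = torch.softmax(self.style_encoder(pooled), dim=-1)
            stacked = torch.stack([self.style_vectors[n] for n in self.style_names])
            style_vec = weights @ stacked
        h = (self.video_proj(pooled) + style_vec).unsqueeze(0)
        inputs = style_vec.unsqueeze(1).expand(-1, max_len, -1).contiguous()
        out, _ = self.decoder(inputs, h)
        return self.word_head(out)  # (batch, max_len, vocab_size) word logits

# Usage: a stage-1 call fixes the style; a stage-2 call lets the video decide.
model = StyleAwareCaptioner()
feats = torch.randn(2, 8, 512)
logits_stage1 = model(feats, style="short")
logits_stage2 = model(feats)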

Original language: English
Article number: 112258
Journal: Knowledge-Based Systems
Volume: 301
DOIs
State: Published - 9 Oct 2024

Keywords

  • Controllable
  • Style-aware
  • Two-stage learning
  • Video captioning

