Spatiotemporal modeling for video summarization using convolutional recurrent neural network

Yuan Yuan, Haopeng Li, Qi Wang

Research output: Contribution to journal › Article › peer-review

41 Scopus citations

Abstract

In this paper, a novel neural network named CRSum is proposed for the video summarization task. The proposed network integrates feature extraction, temporal modeling, and summary generation into an end-to-end architecture. Compared with previous work on this task, the proposed method has three distinctive characteristics: 1) it is the first to leverage a convolutional recurrent neural network to simultaneously model the spatial and temporal structure of video for summarization; 2) rich, fine-grained video features are obtained in the proposed architecture through trainable three-dimensional convolutional neural networks and feature fusion; and 3) a new loss function named Sobolev loss is defined, which constrains the derivative of sequential data to exploit the latent temporal structure of video. A series of experiments is conducted to demonstrate the effectiveness of the proposed method, and the method is further analyzed from several aspects through carefully designed experiments.
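The abstract describes the convolutional recurrent backbone and the Sobolev loss only at a high level, so the two sketches below illustrate one plausible reading of each. Neither is taken from the authors' code; all layer sizes, names, and hyperparameters are illustrative assumptions.

A convolutional recurrent network replaces the matrix multiplications inside a recurrent cell with convolutions, so the hidden state keeps the spatial layout of the frame features while the recurrence carries temporal context. A minimal ConvLSTM-style cell in PyTorch, assuming 2-D feature maps as input:

```python
# Minimal sketch of a convolutional recurrent cell (ConvLSTM-style),
# not the authors' released implementation. Gates are computed by a
# 2-D convolution, so each recurrent step preserves spatial structure.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch,
                               kernel_size, padding=pad)
        self.hid_ch = hid_ch

    def forward(self, x, state):
        h, c = state  # hidden and cell state, each (B, hid_ch, H, W)
        i, f, g, o = torch.chunk(
            self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```

The Sobolev loss is described as constraining the derivative of sequential data. One natural sketch, assuming per-frame importance scores over time, penalizes both the value error and the error of the first-order temporal differences; the weight `lam` is a hypothetical hyperparameter, not taken from the paper:

```python
# Sketch of a Sobolev-style loss under the assumption that it augments
# an ordinary value-matching term with a penalty on the discrete
# temporal derivative of the predicted importance scores.
import torch
import torch.nn.functional as F

def sobolev_loss(pred, target, lam=1.0):
    """pred, target: (batch, time) frame-importance scores."""
    value_term = F.mse_loss(pred, target)
    # First-order differences approximate the derivative along time.
    deriv_term = F.mse_loss(pred[:, 1:] - pred[:, :-1],
                            target[:, 1:] - target[:, :-1])
    return value_term + lam * deriv_term
```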

Original language: English
Article number: 8715406
Pages (from-to): 64676-64685
Number of pages: 10
Journal: IEEE Access
Volume: 7
State: Published - 2019

Keywords

  • CRNN
  • CRSum
  • Sobolev loss
  • spatiotemporal modeling
  • video summarization
