Robust video summarization using collaborative representation of adjacent frames

Mingyang Ma, Shaohui Mei, Shuai Wan, Zhiyong Wang, David Dagan Feng

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

With the ever-increasing volume of video content, efficient and effective video summarization (VS) techniques are urgently needed to manage large amounts of video data. Recent developments in sparse-representation-based approaches have demonstrated promising results for VS. However, these existing approaches treat each frame independently, so the performance can be strongly influenced by any individual frame. In this paper, we formulate the VS problem with a collaborative representation model that takes the visual similarity of adjacent frames into consideration. Specifically, during reconstruction, each individual frame and its adjacent frames are reconstructed collaboratively, so that the impact of any individual frame is weakened. In addition, a greedy iterative algorithm is designed for model optimization, where the sparsity and the average percentage of reconstruction (APOR) are adopted to control the iteration. Experimental results on two benchmark datasets with various types of videos demonstrate that the proposed method not only outperforms the state of the art, but also improves robustness to transitional frames and "outlier" frames.
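The core idea of the abstract — scoring each frame by how well it is reconstructed *together with* its adjacent frames, so one atypical frame carries less weight — can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: the ridge-style least-squares solver, the window size, and all function and variable names are assumptions introduced here.

```python
import numpy as np

def collaborative_residual(frames, dictionary, i, window=1, lam=0.1):
    """Reconstruct frame i jointly with its adjacent frames from a
    dictionary of selected keyframes, and return the average residual
    over the group, so a single "outlier" frame has less influence.

    frames:     (d, n) array, one feature column per video frame
    dictionary: (d, k) array of selected keyframe features
    """
    lo, hi = max(0, i - window), min(frames.shape[1], i + window + 1)
    group = frames[:, lo:hi]  # frame i plus its temporal neighbours
    # l2-regularised least squares: C = (D^T D + lam*I)^{-1} D^T G
    D = dictionary
    coeffs = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ group)
    residuals = np.linalg.norm(group - D @ coeffs, axis=0)
    return residuals.mean()  # average over the frame and its neighbours

rng = np.random.default_rng(0)
frames = rng.normal(size=(64, 10))        # 10 frames with 64-dim features
dictionary = frames[:, [0, 4, 9]]         # 3 hypothetically selected keyframes
print(collaborative_residual(frames, dictionary, i=5))
```

A greedy dictionary-selection loop, as described in the abstract, would repeatedly add the frame whose inclusion most reduces such residuals, stopping once a sparsity budget or an APOR-style reconstruction threshold is reached.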

Original language: English
Pages (from-to): 28985-29005
Number of pages: 21
Journal: Multimedia Tools and Applications
Volume: 78
Issue number: 20
State: Published - 1 Oct 2019

Keywords

  • Dictionary selection
  • Robustness
  • Sparse representation
  • Video summarization
