Combining scene model and fusion for night video enhancement

Jing Li, Tao Yang, Quan Pan, Yongmei Cheng

Research output: Contribution to journal › Article › peer-review


Abstract

This paper presents a video context enhancement method for night surveillance. The basic idea is to extract and fuse the meaningful information from video sequences captured by a fixed camera under different illumination conditions. A unique characteristic of the algorithm is that it separates the image context into two classes and estimates them in different ways. One class contains the basic surrounding scene information and the scene model, obtained via background modeling and object tracking in the daytime video sequence. The other class is extracted from the nighttime video and comprises frequently moving regions, high-illumination regions, and high-gradient regions; the scene model and a pixel-wise difference method are used to segment these three regions. A shift-invariant discrete wavelet-based image fusion technique integrates all of this context information into the final result. Experimental results demonstrate that the proposed approach provides substantially more detail and meaningful information for nighttime video.
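
As a rough illustration of the fusion step described above, the sketch below applies a shift-invariant (stationary) wavelet transform to a daytime reference image and a nighttime frame, averages the approximation bands, and keeps the larger-magnitude detail coefficients. It assumes Python with NumPy and PyWavelets; the fusion rule and the function name swt_fuse are illustrative choices, not the paper's exact implementation.

    # A minimal sketch of shift-invariant wavelet image fusion, assuming
    # Python with NumPy and PyWavelets (pywt.swt2 / pywt.iswt2). The
    # fusion rule (average approximation bands, keep the larger-magnitude
    # detail coefficients) is a common choice, not the paper's exact rule.
    import numpy as np
    import pywt

    def swt_fuse(day, night, wavelet="db2", level=2):
        """Fuse two equal-size grayscale float images via the stationary
        (shift-invariant) wavelet transform. pywt.swt2 requires height
        and width divisible by 2**level."""
        cs_day = pywt.swt2(day, wavelet, level=level)
        cs_night = pywt.swt2(night, wavelet, level=level)
        fused = []
        for (ca_d, dets_d), (ca_n, dets_n) in zip(cs_day, cs_night):
            # Low-frequency scene context: average the two sources.
            ca = 0.5 * (ca_d + ca_n)
            # High-frequency detail: pick the coefficient with the larger
            # magnitude, preserving edges from whichever image is sharper.
            dets = tuple(np.where(np.abs(d) >= np.abs(n), d, n)
                         for d, n in zip(dets_d, dets_n))
            fused.append((ca, dets))
        return pywt.iswt2(fused, wavelet)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        day = rng.uniform(0, 255, (256, 256))    # stand-in daytime scene
        night = rng.uniform(0, 255, (256, 256))  # stand-in nighttime frame
        out = swt_fuse(day, night)
        print(out.shape)  # (256, 256)

Unlike the decimated DWT, the stationary transform is shift-invariant, so small misalignments between the day and night frames do not produce fusion artifacts at coefficient boundaries, which is presumably why the paper adopts it.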

Original language: English
Pages (from-to): 88-93
Number of pages: 6
Journal: Journal of Electronics
Volume: 26
Issue number: 1
DOIs
State: Published - Jan 2009

Keywords

  • Background modeling
  • Image fusion
  • Night video enhancement
  • Object tracking
