Event analysis based on multiple video sensors for cooperative environment perception

Tian Wang, Jie Chen, Aichun Zhu, Hichem Snoussi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Safety is one of the most crucial aspects of the modern transportation domain. In this paper, we exploit videos captured by multiple external video sensors mounted on the infrastructure and propose an algorithm that perceives the environment from these different viewpoints. The algorithm consists of two parts: a descriptor for representing the event and a classification method for analyzing the scene. A covariance matrix feature descriptor is proposed to fuse the optical flow and the intensity of the image, and a nonlinear one-class SVM with a multi-kernel strategy is used to detect unusual events in the scene. The method is applied to analyze events in a video surveillance dataset, and promising results are obtained.
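The abstract describes a two-part pipeline: a covariance matrix descriptor that fuses image intensity with optical flow, followed by a one-class SVM that flags unusual events. The sketch below is a minimal, hypothetical illustration of that idea in Python, assuming OpenCV (Farnebäck optical flow) and scikit-learn; it trains a single-kernel one-class SVM rather than the paper's multi-kernel strategy, and all function and variable names are my own, not the authors'.

```python
import numpy as np
import cv2
from sklearn.svm import OneClassSVM

def covariance_descriptor(prev_gray, gray):
    """Fuse intensity and optical flow into one covariance descriptor.

    prev_gray, gray: consecutive grayscale frames (2-D uint8 arrays).
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Per-pixel feature vectors: intensity and the two flow components.
    feats = np.stack([gray.astype(np.float64).ravel(),
                      flow[..., 0].ravel(),
                      flow[..., 1].ravel()])        # shape (3, n_pixels)
    cov = np.cov(feats)                             # 3x3 covariance matrix
    # Flatten the symmetric matrix (upper triangle) for the SVM input.
    return cov[np.triu_indices(3)]

def detect_unusual(normal_frames, test_frames):
    """Fit on descriptors of normal frames; -1 marks an unusual event."""
    train = np.array([covariance_descriptor(a, b)
                      for a, b in zip(normal_frames, normal_frames[1:])])
    clf = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(train)
    test = np.array([covariance_descriptor(a, b)
                     for a, b in zip(test_frames, test_frames[1:])])
    return clf.predict(test)
```

Flattening the covariance matrix into its upper triangle is a simplification: covariance matrices live on a Riemannian manifold, and kernel methods over them typically pass through a matrix logarithm first, which is closer in spirit to the nonlinear kernel strategy the abstract mentions.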

Original language: English
Title of host publication: Proceedings of 2015 IEEE International Conference on Progress in Informatics and Computing, PIC 2015
Editors: Liang Xiao, Yinglin Wang
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 438-442
Number of pages: 5
ISBN (Electronic): 9781467380867
State: Published - 10 Jun 2016
Event: 3rd IEEE International Conference on Progress in Informatics and Computing, PIC 2015 - Nanjing, China
Duration: 18 Dec 2015 - 20 Dec 2015

Publication series

Name: Proceedings of 2015 IEEE International Conference on Progress in Informatics and Computing, PIC 2015

Conference

Conference: 3rd IEEE International Conference on Progress in Informatics and Computing, PIC 2015
Country/Territory: China
City: Nanjing
Period: 18/12/15 - 20/12/15

Keywords

  • cooperative environment perception
  • event analysis
  • multiple video sensors
