Quantifying and Detecting Collective Motion in Crowd Scenes

Xuelong Li, Mulin Chen, Qi Wang

Research output: Contribution to journal › Article › peer-review

36 Scopus citations

Abstract

People in crowd scenes tend to exhibit consistent behaviors and form collective motions. The analysis of collective motion has attracted considerable interest in computer vision. Nevertheless, the effort is hampered by the complex nature of collective motions. Since collective motions are formed by individuals, this paper proposes a new framework for both quantifying and detecting collective motion by investigating the spatio-temporal behavior of individuals. The main contributions of this work are threefold: 1) an intention-aware model is built to fully capture the intrinsic dynamics of individuals; 2) a structure-based collectiveness measurement is developed to accurately quantify the collective properties of crowds; and 3) a multi-stage clustering strategy is formulated to detect both the local and global behavior consistency in crowd scenes. Experiments on real-world datasets show that the proposed method handles crowds with various structures and time-varying dynamics. In particular, it achieves nearly 10% improvement over the competitors in terms of normalized mutual information (NMI), Purity, and Rand index (RI). Its applicability is further illustrated in the contexts of anomaly detection and semantic scene segmentation.
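
For readers unfamiliar with the reported evaluation metrics, the following sketch (not taken from the paper; the group labels are hypothetical and scikit-learn is assumed to be available for the NMI computation) illustrates how NMI, Purity, and the Rand index can be computed for a group-detection result, given per-individual ground-truth and predicted group assignments.

```python
# Illustrative sketch (not from the paper): the three clustering metrics
# reported in the abstract, computed for a hypothetical group-detection
# result given per-individual ground-truth and predicted group labels.
from collections import Counter
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score

def purity(y_true, y_pred):
    # Assign each predicted group to its majority ground-truth group and
    # count the fraction of individuals that end up correctly assigned.
    correct = 0
    for cluster in set(y_pred):
        members = [t for t, p in zip(y_true, y_pred) if p == cluster]
        correct += Counter(members).most_common(1)[0][1]
    return correct / len(y_true)

def rand_index(y_true, y_pred):
    # Fraction of individual pairs on which the two labelings agree:
    # both place the pair in the same group, or both in different groups.
    pairs = list(combinations(range(len(y_true)), 2))
    agree = sum((y_true[i] == y_true[j]) == (y_pred[i] == y_pred[j])
                for i, j in pairs)
    return agree / len(pairs)

# Hypothetical example: six pedestrians forming two ground-truth groups.
y_true = [0, 0, 0, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 1]
print("NMI   :", normalized_mutual_info_score(y_true, y_pred))
print("Purity:", purity(y_true, y_pred))
print("RI    :", rand_index(y_true, y_pred))
```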

Original language: English
Article number: 9062523
Pages (from-to): 5571-5583
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 29
DOIs
State: Published - 2020

Keywords

  • Clustering
  • Collectiveness
  • Crowd analysis
  • Group detection
  • Manifold learning
