PCC Net: Perspective crowd counting via spatial convolutional network

Junyu Gao, Qi Wang, Xuelong Li

Research output: Contribution to journal › Article › peer-review

201 Citations (Scopus)

Abstract

Crowd counting from a single image is a challenging task due to high appearance similarity, perspective changes, and severe congestion. Many methods focus only on local appearance features and therefore cannot handle the aforementioned challenges. To tackle them, we propose a perspective crowd counting network (PCC Net), which consists of three parts: 1) density map estimation (DME), which learns very local features for density map estimation; 2) random high-level density classification (R-HDC), which extracts global features to predict coarse density labels of random patches in images; and 3) fore-/background segmentation (FBS), which encodes mid-level features to segment the foreground from the background. In addition, the Down, Up, Left, and Right (DULR) module is embedded in PCC Net to encode perspective changes in four directions. The proposed PCC Net is verified on five mainstream datasets, achieving state-of-the-art performance on one and competitive results on the other four. The source code is available at https://github.com/gjy3035/PCC-Net.
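To make the DULR idea concrete, the sketch below imitates SCNN-style slice-by-slice message passing, where each row (or column) of a feature map receives a convolved, ReLU-activated message from its neighbor, swept in the four directions Down, Up, Left, and Right. This is a simplified, illustrative NumPy reimplementation under assumptions of my own (single-channel map, 1-D kernel, function names `dulr_pass`/`dulr_module` are hypothetical), not the authors' actual layer.

```python
import numpy as np

def dulr_pass(feat, kernel, direction):
    """One directional sweep: each slice gets a ReLU-activated,
    convolved message from the previous slice in the sweep order.
    feat: 2-D array (H, W); kernel: 1-D kernel applied along the slice."""
    out = feat.astype(float).copy()
    if direction in ("down", "up"):
        # sweep over rows; "down" propagates top-to-bottom, "up" the reverse
        rows = range(1, out.shape[0]) if direction == "down" \
            else range(out.shape[0] - 2, -1, -1)
        step = -1 if direction == "down" else 1
        for i in rows:
            msg = np.convolve(out[i + step], kernel, mode="same")
            out[i] += np.maximum(msg, 0.0)  # ReLU before merging
    else:
        # sweep over columns; "right" propagates left-to-right
        cols = range(1, out.shape[1]) if direction == "right" \
            else range(out.shape[1] - 2, -1, -1)
        step = -1 if direction == "right" else 1
        for j in cols:
            msg = np.convolve(out[:, j + step], kernel, mode="same")
            out[:, j] += np.maximum(msg, 0.0)
    return out

def dulr_module(feat, kernel):
    """Apply the four directional sweeps in sequence: D, U, L, R."""
    for d in ("down", "up", "left", "right"):
        feat = dulr_pass(feat, kernel, d)
    return feat
```

Because every slice is conditioned on the slice before it, information travels across the whole map along each direction, which is what lets the module encode gradual perspective change from near to far.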

Original language: English
Article number: 8723079
Pages (from-to): 3486-3498
Number of pages: 13
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 30
Issue number: 10
DOI
Publication status: Published - Oct 2020
