Action recognition by joint learning

Yuan Yuan, Lei Qi, Xiaoqiang Lu

Research output: Contribution to journal › Article › peer-review

16 Citations (Scopus)

Abstract

Human action recognition from videos has attracted much research interest due to promising applications including video surveillance, video annotation, and interactive gaming. Although various methods have been proposed for human action recognition, many challenges remain, such as illumination conditions, viewpoint changes, camera motion, and cluttered backgrounds. Extracting discriminative representations is one of the main ways to handle these challenges. In this paper, we propose a novel action recognition method that simultaneously learns a middle-level representation and a classifier by jointly training a multinomial logistic regression (MLR) model and a discriminative dictionary. In the proposed method, the sparse codes of low-level representations, serving as latent variables of the MLR, capture the structure of the low-level features and are thus more discriminative. Meanwhile, the training of the dictionary and the MLR model is integrated into one objective function so that category information is taken into account. By optimizing this objective function, we learn a discriminative dictionary modulated by the MLR and an MLR model driven by sparse coding. The proposed method is evaluated on the YouTube action dataset and the HMDB51 dataset. Experimental results demonstrate that our method is comparable with mainstream methods.
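The joint formulation described above can be sketched numerically. The following is a minimal illustration (not the authors' code) of alternating between a sparse coding step and gradient updates on a dictionary D and an MLR classifier W, where the sparse codes S act as latent mid-level features and the reconstruction and classification losses share one objective. All sizes, step sizes, weights, and iteration counts are illustrative assumptions, and the data here are random toy samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k, c = 60, 20, 15, 3          # samples, feature dim, dictionary atoms, classes
X = rng.standard_normal((d, n))     # low-level features as columns (toy data)
y = rng.integers(0, c, n)           # class labels
Y = np.eye(c)[y].T                  # one-hot labels, shape (c, n)

D = rng.standard_normal((d, k))
D /= np.linalg.norm(D, axis=0)      # unit-norm dictionary atoms
W = np.zeros((c, k))                # MLR weights
lam, gamma, lr = 0.1, 1.0, 0.01     # sparsity weight, classification weight, step size

def softmax(Z):
    Z = Z - Z.max(axis=0)
    E = np.exp(Z)
    return E / E.sum(axis=0)

for it in range(100):
    # Sparse coding step: ISTA on the joint loss
    #   0.5||X - DS||^2 + lam||S||_1 + gamma * CE(softmax(WS), Y)
    # so the codes are shaped by both reconstruction and category information.
    S = np.zeros((k, n))
    L = np.linalg.norm(D, 2) ** 2 + gamma * np.linalg.norm(W, 2) ** 2 + 1e-8
    for _ in range(50):
        P = softmax(W @ S)
        G = D.T @ (D @ S - X) + gamma * W.T @ (P - Y) / n
        S = S - G / L
        S = np.sign(S) * np.maximum(np.abs(S) - lam / L, 0.0)  # soft threshold

    # Gradient step on D (reconstruction term), then renormalize atoms
    D -= lr * (D @ S - X) @ S.T
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)

    # Gradient step on W (cross-entropy term of the MLR)
    P = softmax(W @ S)
    W -= lr * gamma * (P - Y) @ S.T / n

# Training accuracy of the MLR on the learned sparse codes
acc = (np.argmax(W @ S, axis=0) == y).mean()
```

The alternating scheme makes the dictionary "modulated by the MLR" (through the label-dependent term in the coding step) and the MLR "driven by sparse coding" (it is trained on S rather than on raw features); the actual optimization in the paper may differ in its solver and parameterization.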

Original language: English
Pages (from-to): 77-85
Number of pages: 9
Journal: Image and Vision Computing
Volume: 55
DOI
Publication status: Published - 1 Nov 2016
Externally published: Yes

