Action recognition by joint learning

Yuan Yuan, Lei Qi, Xiaoqiang Lu

Research output: Contribution to journal › Article › peer-review

Abstract

Due to promising applications such as video surveillance, video annotation, and interactive gaming, human action recognition from videos has attracted much research interest. Although various methods have been proposed for human action recognition, many challenges remain, such as illumination conditions, viewpoint changes, camera motion, and cluttered backgrounds. Extracting discriminative representations is one of the main ways to handle these challenges. In this paper, we propose a novel action recognition method that simultaneously learns a middle-level representation and a classifier by jointly training a multinomial logistic regression (MLR) model and a discriminative dictionary. In the proposed method, the sparse codes of the low-level representation, acting as latent variables of the MLR model, capture the structure of the low-level features and are thus more discriminative. Meanwhile, the training of the dictionary and the MLR model is integrated into a single objective function so that category information is taken into account. By optimizing this objective function, we learn a discriminative dictionary modulated by the MLR model and an MLR model driven by sparse coding. The proposed method is evaluated on the YouTube action dataset and the HMDB51 dataset. Experimental results demonstrate that our method is comparable with mainstream methods.
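To make the joint formulation concrete, a minimal sketch of such an objective (with illustrative notation; the paper's exact formulation may differ) combines a reconstruction term, a sparsity penalty, and the MLR loss:

\min_{D,\,W,\,\{\alpha_i\}} \; \sum_{i=1}^{N} \left( \tfrac{1}{2}\,\lVert x_i - D\alpha_i \rVert_2^2 \;+\; \lambda\,\lVert \alpha_i \rVert_1 \;+\; \gamma\,\ell\!\left(y_i, W\alpha_i\right) \right),
\qquad
\ell(y, W\alpha) \;=\; -\log \frac{\exp\!\left(w_y^{\top}\alpha\right)}{\sum_{c=1}^{C} \exp\!\left(w_c^{\top}\alpha\right)},

where x_i is a low-level descriptor, \alpha_i its sparse code (the latent variable), D the dictionary, W = [w_1, \dots, w_C] the MLR weight matrix, and \lambda, \gamma trade-off parameters. Optimizing all variables against this single objective couples the two models: the dictionary is updated under the classification loss while the classifier is trained on the sparse codes; such objectives are typically minimized by alternating between a sparse-coding step and gradient updates of D and W.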

Original language: English
Pages (from-to): 77-85
Number of pages: 9
Journal: Image and Vision Computing
Volume: 55
DOIs
State: Published - 1 Nov 2016
Externally published: Yes

Keywords

  • Action recognition
  • Computer vision
  • Joint learning
  • Multinomial logistic regression (MLR)
  • Sparse coding
