Harnessing lab knowledge for real-world action recognition

Zhigang Ma, Yi Yang, Feiping Nie, Nicu Sebe, Shuicheng Yan, Alexander G. Hauptmann

Research output: Contribution to journal › Article › peer-review

36 Citations (Scopus)

Abstract

Much research on human action recognition has been oriented toward performance gains on lab-collected datasets. Yet real-world videos are more diverse, contain more complicated actions, and often only a few of them are precisely labeled. Thus, recognizing actions from these videos is a challenging task. The paucity of labeled real-world videos motivates us to "borrow" strength from other resources. Specifically, considering that many lab datasets are available, we propose to harness lab datasets to facilitate action recognition in real-world videos, given that the lab and real-world datasets are related. As their action categories are usually inconsistent, we design a multi-task learning framework to jointly optimize the classifiers for both sides. The general Schatten $p$-norm is imposed on the two classifiers to explore the shared knowledge between them. In this way, our framework is able to mine the shared knowledge between two datasets even if the two have different action categories, which is a major virtue of our method. The shared knowledge is further used to improve action recognition in real-world videos. Extensive experiments are performed on real-world datasets with promising results.
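As an illustration only (not the paper's exact formulation), a Schatten $p$-norm regularized multi-task objective of the kind described in the abstract can be sketched as follows, where $W_{\mathrm{lab}}$ and $W_{\mathrm{real}}$ are the classifier matrices for the lab and real-world datasets, $\mathcal{L}_{\mathrm{lab}}$ and $\mathcal{L}_{\mathrm{real}}$ are their respective training losses, $\gamma$ is a trade-off parameter, and $\sigma_i(\cdot)$ denotes the $i$-th singular value:

$$
\min_{W_{\mathrm{lab}},\, W_{\mathrm{real}}} \;
\mathcal{L}_{\mathrm{lab}}(W_{\mathrm{lab}})
+ \mathcal{L}_{\mathrm{real}}(W_{\mathrm{real}})
+ \gamma \left\| \left[ W_{\mathrm{lab}},\, W_{\mathrm{real}} \right] \right\|_{S_p}^{p},
\qquad
\| W \|_{S_p} = \Big( \sum_i \sigma_i(W)^{p} \Big)^{1/p}.
$$

Penalizing the Schatten $p$-norm of the stacked classifier matrix encourages it to be low rank, so the two tasks share a common subspace even when their action categories differ.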

Original language: English
Pages (from-to): 60-73
Number of pages: 14
Journal: International Journal of Computer Vision
Volume: 109
Issue number: 1-2
DOI
Publication status: Published - Aug 2014
Externally published: Yes
