Boosted subunits: A framework for recognising sign language from videos

Junwei Han, George Awad, Alistair Sutherland

Research output: Contribution to journal › Editorial

18 Scopus citations

Abstract

This study addresses the problem of vision-based sign language recognition, i.e., translating signs in video into English. The authors propose a fully automatic system that first breaks signs into manageable subunits. A variety of spatiotemporal descriptors are extracted to form a feature vector for each subunit. Based on these features, subunits are clustered to yield codebooks. A boosting algorithm then learns a subset of weak classifiers, representing discriminative combinations of features and subunits, and combines them into a strong classifier for each sign. A joint learning strategy is also adopted to share subunits across sign classes, leading to more efficient classification. Experimental results on real-world hand gesture videos demonstrate that the proposed approach is a promising basis for an effective and scalable recognition system.
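The pipeline described in the abstract can be sketched in a few steps: cluster subunit feature vectors into a codebook, encode each sign as a bag-of-subunits histogram, and boost simple weak classifiers over those histogram features. The sketch below is a minimal illustration of that general idea, not the authors' implementation; the synthetic data, the naive k-means, the decision-stump weak learners, and all function names are assumptions for the sake of the example.

```python
import numpy as np

# Hedged sketch (NOT the paper's implementation): subunit clustering into a
# codebook, bag-of-subunits encoding, and AdaBoost over decision stumps.
rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Naive k-means: returns a codebook of k cluster centres."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centres[j] = pts.mean(axis=0)
    return centres

def encode(subunits, codebook):
    """Histogram of nearest codewords for one sign's subunit features."""
    labels = np.argmin(((subunits[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(codebook)).astype(float)

def adaboost_stumps(X, y, rounds=5):
    """Binary AdaBoost with threshold stumps on single features; y in {-1,+1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(rounds):
        best = None
        for f in range(d):                      # exhaustive stump search
            for t in np.unique(X[:, f]):
                for s in (1.0, -1.0):
                    pred = s * np.where(X[:, f] > t, 1.0, -1.0)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, s)
        err, f, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # classic AdaBoost weight
        pred = s * np.where(X[:, f] > t, 1.0, -1.0)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        stumps.append((alpha, f, t, s))
    return stumps

def predict(stumps, X):
    score = np.zeros(len(X))
    for alpha, f, t, s in stumps:
        score += alpha * s * np.where(X[:, f] > t, 1.0, -1.0)
    return np.where(score >= 0, 1.0, -1.0)

# Synthetic data: two sign classes whose subunits favour different clusters.
def make_sign(cls):
    means = np.array([[0, 0], [5, 5], [0, 5], [5, 0]], float)
    favoured = [0, 1] if cls == 0 else [2, 3]
    idx = rng.choice(favoured, size=8)
    return means[idx] + rng.normal(scale=0.3, size=(8, 2))

signs = [make_sign(c) for c in (0, 1) for _ in range(20)]
labels = np.array([1.0] * 20 + [-1.0] * 20)
codebook = kmeans(np.vstack(signs), k=4)          # subunit codebook
H = np.array([encode(s, codebook) for s in signs])  # bag-of-subunits
model = adaboost_stumps(H, labels)
acc = (predict(model, H) == labels).mean()
```

In a one-vs-rest setup, one such boosted classifier would be trained per sign class; the joint learning strategy mentioned in the abstract goes further by letting classes share discriminative subunits rather than searching stumps independently per class.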

Original language: English
Pages (from-to): 70-80
Number of pages: 11
Journal: IET Image Processing
Volume: 7
Issue number: 1
DOIs
State: Published - 2013

