Abstract
This study addresses vision-based sign language recognition, that is, the translation of signs into English. The authors propose a fully automatic system that first segments signs into manageable subunits. A variety of spatiotemporal descriptors are extracted to form a feature vector for each subunit. Based on these features, subunits are clustered to yield codebooks. A boosting algorithm is then applied to learn a subset of weak classifiers representing discriminative combinations of features and subunits, and to combine them into a strong classifier for each sign. A joint learning strategy is also adopted to share subunits across sign classes, which leads to more efficient classification. Experimental results on real-world hand gesture videos demonstrate that the proposed approach is a promising basis for an effective and scalable system.
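To make the boosting step concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes each video is already reduced to a histogram over codebook subunits, and runs AdaBoost-style rounds in which each weak classifier is a threshold test on a single subunit's frequency. The function and variable names (`train_boosted_sign_classifier`, `predict`) are hypothetical; the real system operates on richer spatiotemporal descriptors and shares subunits jointly across sign classes.

```python
import numpy as np

def train_boosted_sign_classifier(X, y, n_rounds=10):
    """AdaBoost-style selection of subunit-based weak classifiers.

    X: (n_samples, n_subunits) codebook histograms.
    y: labels in {-1, +1} (one sign class vs. the rest).
    Each weak classifier is a polarity/threshold test on one subunit.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)            # per-sample weights
    classifiers = []                   # (subunit, threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        # Exhaustively search for the most discriminative subunit test
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol, pred)
        err, j, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)      # weak-classifier weight
        w *= np.exp(-alpha * y * pred)             # upweight mistakes
        w /= w.sum()
        classifiers.append((j, thr, pol, alpha))
    return classifiers

def predict(classifiers, x):
    """Strong classifier: weighted vote of the selected weak classifiers."""
    score = sum(a * (1 if p * (x[j] - t) >= 0 else -1)
                for j, t, p, a in classifiers)
    return 1 if score >= 0 else -1
```

For example, given toy histograms where the first subunit distinguishes the target sign, a few boosting rounds suffice to separate the classes; the joint-learning variant in the paper additionally lets several sign classifiers reuse the same subunit tests.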
| Original language | English |
|---|---|
| Pages (from-to) | 70-80 |
| Number of pages | 11 |
| Journal | IET Image Processing |
| Volume | 7 |
| Issue number | 1 |
| DOIs | |
| State | Published - 2013 |
Article title: Boosted subunits: A framework for recognising sign language from videos