A Gesture Recognition Approach Using Multimodal Neural Network

Xiaoyu Song, Hong Chen, Qing Wang

Research output: Contribution to journal › Conference article › peer-review


Abstract

Gesture recognition based on the visual modality alone often suffers a reduced recognition rate in extreme environments, such as dim lighting or a skin-colored background. When human beings make judgments, they integrate information from multiple modalities, and there are natural connections between gestures and speech. Motivated by this, we propose a multimodal gesture recognition network: a 3D CNN extracts visual features, a GRU extracts speech features, and the two are fused at a late stage to make the final prediction. We also adopt a two-stage structure, with a shallow network as a detector and a deep network as a classifier, to reduce memory usage and energy consumption. We record a gesture dataset in a dim environment, named DarkGesture, in which each person says the gesture's name while performing it. The proposed network is then compared against single-modal recognition networks on DarkGesture; the results show that the proposed multimodal network achieves better recognition performance.
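The late-fusion design described in the abstract (a 3D CNN branch for video, a GRU branch for speech, concatenated before a final classifier) can be sketched as follows. This is a minimal illustrative PyTorch sketch: all layer sizes, input shapes, and names are assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class LateFusionGestureNet(nn.Module):
    """Sketch of a late-fusion multimodal network (hypothetical sizes)."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Visual branch: tiny 3D CNN over video clips shaped (B, C, T, H, W)
        self.visual = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # -> (B, 16, 1, 1, 1)
            nn.Flatten(),              # -> (B, 16)
        )
        # Speech branch: GRU over acoustic feature frames shaped (B, T, F)
        self.gru = nn.GRU(input_size=40, hidden_size=32, batch_first=True)
        # Late fusion: concatenate the two branch embeddings, then classify
        self.classifier = nn.Linear(16 + 32, num_classes)

    def forward(self, video, audio):
        v = self.visual(video)     # (B, 16) visual embedding
        _, h = self.gru(audio)     # h: (1, B, 32) final GRU hidden state
        a = h.squeeze(0)           # (B, 32) speech embedding
        return self.classifier(torch.cat([v, a], dim=1))

# Forward pass with dummy data
net = LateFusionGestureNet(num_classes=10)
video = torch.randn(2, 3, 8, 32, 32)   # batch of 2 short RGB clips
audio = torch.randn(2, 20, 40)         # batch of 2 speech feature sequences
logits = net(video, audio)
print(tuple(logits.shape))  # (2, 10)
```

The key point is that each modality is encoded independently and the fusion happens only at the classification stage, so a failure of one modality (e.g. vision in the dark) can be compensated by the other.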
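The two-stage structure (a cheap shallow detector gating an expensive deep classifier) can be illustrated with a toy pipeline. All function names, the energy heuristic, and the threshold below are hypothetical placeholders, not the paper's actual components.

```python
def shallow_detector(frame_energy, threshold=0.5):
    """Stage 1 (cheap): is there enough activity to bother the deep net?"""
    return frame_energy > threshold

def deep_classifier(clip):
    """Stage 2 (expensive): stand-in for the full multimodal network."""
    return max(clip)  # placeholder: pretend the peak value is the score

def two_stage_pipeline(stream):
    """Run the deep classifier only on clips the detector flags."""
    results = []
    for clip in stream:
        energy = sum(clip) / len(clip)
        if shallow_detector(energy):
            results.append(deep_classifier(clip))  # gesture detected
        else:
            results.append(None)                   # skip the deep network
    return results

stream = [[0.1, 0.2, 0.1], [0.9, 0.8, 0.7]]
print(two_stage_pipeline(stream))  # [None, 0.9]
```

Because most of an always-on stream contains no gesture, gating the deep network this way is what yields the memory and energy savings the abstract claims.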

Original language: English
Article number: 012127
Journal: Journal of Physics: Conference Series
Volume: 1544
Issue number: 1
DOIs
State: Published - 2 Jun 2020
Externally published: Yes
Event: 2020 5th International Conference on Intelligent Computing and Signal Processing, ICSP 2020 - Suzhou, China
Duration: 20 Mar 2020 - 22 Mar 2020
