Visual-semantic network: a visual and semantic enhanced model for gesture recognition

Yizhe Wang, Congqi Cao, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Gesture recognition has attracted considerable attention and made encouraging progress in recent years due to its great potential in applications. However, spatial and temporal modeling in gesture recognition remains an open problem. Specifically, existing works lack efficient temporal modeling and effective spatial attention capacity. To efficiently model temporal information, we first propose a long- and short-term temporal shift module (LS-TSM) that models long-term and short-term temporal information simultaneously. Then, we propose a spatial attention module (SAM) that focuses on where change primarily occurs, providing effective spatial attention capacity. In addition, the semantic relationship among gestures is helpful for gesture recognition, but it is usually neglected by previous works. Therefore, we propose a label relation module (LRM) that takes full advantage of the relationships among classes based on the semantic information of their labels. To explore the best form of LRM, we design four different semantic reconstruction methods to incorporate the semantic relationship information into the class labels' semantic space. We perform extensive ablation studies to determine the best settings of each module. The best form of LRM is used to build our visual-semantic network (VS Network), which achieves state-of-the-art performance on two gesture datasets, i.e., EgoGesture and NVGesture.
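The core idea behind a temporal shift module is to exchange information across frames at zero parameter cost by shifting a fraction of feature channels along the time axis. The sketch below illustrates the long- and short-term variant the abstract names, combining a 1-frame shift with a larger-stride shift; the specific channel fractions, strides, and the helper name `ls_temporal_shift` are illustrative assumptions, not the paper's exact LS-TSM design.

```python
import numpy as np

def ls_temporal_shift(x, short=1, long_=4, frac=8):
    """Sketch of a long- and short-term temporal shift (assumed design).

    x: array of shape (T, C) -- T frames, C feature channels per frame.
    One 1/frac slice of the channels is shifted by `short` frames,
    another 1/frac slice by `long_` frames; the remaining channels
    are left untouched. Vacated positions are zero-filled so no
    information leaks in from outside the clip.
    """
    T, C = x.shape
    step = C // frac
    out = x.copy()
    # Short-term shift: channels [0:step] move forward by `short` frames.
    out[:, :step] = 0
    out[short:, :step] = x[:-short, :step]
    # Long-term shift: channels [step:2*step] move forward by `long_` frames.
    out[:, step:2 * step] = 0
    if T > long_:
        out[long_:, step:2 * step] = x[:-long_, step:2 * step]
    return out
```

Because the shift itself has no learnable weights, it can be inserted before a convolution in each block, letting the following layers mix short-range and long-range temporal context for free.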

Original language: English
Article number: 25
Journal: Visual Intelligence
Volume: 1
Issue number: 1
State: Published - Dec 2023

Keywords

  • Gesture recognition
  • Semantic relationship
  • Spatial attention
  • Temporal modeling
