Adversarial Multi-view Networks for Activity Recognition

  • Lei Bai
  • Lina Yao
  • Xianzhi Wang
  • Salil S. Kanhere
  • Bin Guo
  • Zhiwen Yu
Research output: Contribution to journal › Article › peer-review

37 Scopus citations

Abstract

Human activity recognition (HAR) plays an irreplaceable role in various applications and has been a prosperous research topic for years. Recent studies show significant progress in feature extraction (i.e., data representation) using deep learning techniques. However, they face significant challenges in capturing multi-modal spatio-temporal patterns from sensory data, and they commonly overlook the variations between subjects. We propose a Discriminative Adversarial MUlti-view Network (DAMUN) to address the above issues in sensor-based HAR. We first design a multi-view feature extractor that obtains representations of sensory data streams from temporal, spatial, and spatio-temporal views using convolutional networks. Then, we fuse the multi-view representations into a robust joint representation through a trainable Hadamard fusion module, and finally employ a Siamese adversarial network architecture to reduce the variations between the representations of different subjects. We have conducted extensive experiments under an iterative leave-one-subject-out setting on three real-world datasets and demonstrated both the effectiveness and robustness of our approach.
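The trainable Hadamard fusion described above can be illustrated with a minimal sketch: each view's representation is passed through a trainable projection and the results are combined by an element-wise (Hadamard) product. The function name, the `tanh` activation, and the matrix shapes below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def hadamard_fusion(views, weights):
    """Fuse per-view representations via a trainable Hadamard (element-wise) product.

    views:   list of (d,) feature vectors, e.g. from the temporal, spatial,
             and spatio-temporal extractors (illustrative).
    weights: list of (d, d) trainable projection matrices, one per view
             (hypothetical shapes; the paper's exact parameterization may differ).
    """
    # Project each view, then squash with tanh (assumed activation).
    projected = [np.tanh(W @ v) for W, v in zip(weights, views)]
    # Combine the projected views element-wise into one joint representation.
    fused = projected[0]
    for p in projected[1:]:
        fused = fused * p  # Hadamard product
    return fused

# Toy usage with random data standing in for the three view representations.
rng = np.random.default_rng(0)
d = 8
views = [rng.standard_normal(d) for _ in range(3)]
weights = [0.1 * rng.standard_normal((d, d)) for _ in range(3)]
joint = hadamard_fusion(views, weights)
```

Because every projected view passes through `tanh`, each component of the fused vector is a product of values in [-1, 1], keeping the joint representation bounded regardless of input scale.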

Original language: English
Article number: 42
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume: 4
Issue number: 2
DOIs
State: Published - 15 Jun 2020

Keywords

  • Activity Recognition
  • Adversarial Training
  • Deep Learning
  • Multi-view Representation
