Abstract
Human activity recognition (HAR) plays an irreplaceable role in various applications and has been a flourishing research topic for years. Recent studies show significant progress in feature extraction (i.e., data representation) using deep learning techniques. However, these methods face significant challenges in capturing multi-modal spatial-temporal patterns from sensory data, and they commonly overlook the variability between subjects. We propose a Discriminative Adversarial MUlti-view Network (DAMUN) to address the above issues in sensor-based HAR. We first design a multi-view feature extractor that obtains representations of sensory data streams from temporal, spatial, and spatio-temporal views using convolutional networks. Then, we fuse the multi-view representations into a robust joint representation through a trainable Hadamard fusion module, and finally employ a Siamese adversarial network architecture to reduce the variability between the representations of different subjects. We have conducted extensive experiments under an iterative leave-one-subject-out setting on three real-world datasets and demonstrated both the effectiveness and robustness of our approach.
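The Hadamard fusion step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes all views are projected to the same dimension, and the per-view weight vectors (learned during training in the paper) are fixed here for demonstration.

```python
def hadamard_fusion(views, weights):
    """Fuse equal-length view representations by a weighted
    element-wise (Hadamard) product.

    views   : list of per-view representation vectors
              (e.g., temporal, spatial, spatio-temporal)
    weights : list of per-view weight vectors; trainable in the
              paper, fixed constants in this sketch
    """
    dim = len(views[0])
    fused = [1.0] * dim
    for view, w in zip(views, weights):
        for i in range(dim):
            # multiply in the weighted i-th component of this view
            fused[i] *= w[i] * view[i]
    return fused

# Three hypothetical 4-dimensional view representations
h_t  = [1.0, 2.0, 0.5, 1.0]   # temporal view
h_s  = [2.0, 1.0, 1.0, 0.5]   # spatial view
h_st = [1.0, 1.0, 2.0, 2.0]   # spatio-temporal view
ones = [1.0, 1.0, 1.0, 1.0]   # identity weights for illustration

joint = hadamard_fusion([h_t, h_s, h_st], [ones, ones, ones])
# joint == [2.0, 2.0, 1.0, 1.0]
```

Because the product is element-wise, each dimension of the joint representation is non-zero only when all views agree it carries signal, which is one intuition for why such fusion can yield a robust joint representation.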
| Original language | English |
|---|---|
| Article number | 42 |
| Journal | Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies |
| Volume | 4 |
| Issue number | 2 |
| DOIs | |
| State | Published - 15 Jun 2020 |
Keywords
- Activity Recognition
- Adversarial Training
- Deep Learning
- Multi-view Representation