Attention-based end-to-end models for small-footprint keyword spotting

Changhao Shan, Junbo Zhang, Yujun Wang, Lei Xie

Research output: Contribution to journal › Conference article › peer-review

59 Scopus citations

Abstract

In this paper, we propose an attention-based end-to-end neural approach to small-footprint keyword spotting (KWS), which aims to simplify the pipeline of building a production-quality KWS system. Our model consists of an encoder and an attention mechanism. Using RNNs, the encoder transforms the input signal into a high-level representation. The attention mechanism then weights the encoder features and generates a fixed-length vector. Finally, a linear transformation and softmax function turn this vector into a score used for keyword detection. We also evaluate the performance of different encoder architectures, including LSTM, GRU and CRNN. Experiments on wake-up data show that our approach outperforms the recent Deep KWS approach [9] by a large margin, with the best performance achieved by the CRNN. Specifically, with ∼84K parameters, our attention-based model achieves a 1.02% false rejection rate (FRR) at 1.0 false alarm (FA) per hour.
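The attention-and-scoring stage described above (encoder features → attention weights → fixed-length vector → linear layer → softmax score) can be sketched in a few lines of NumPy. This is an illustrative reconstruction only: the soft-attention form `v^T tanh(W h_t + b)` and all parameter names and dimensions below are assumptions, not the authors' code, and the random features stand in for real RNN encoder outputs.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_kws_score(h, W, b, v, U):
    """Turn encoder outputs h (T x D) into a 2-class keyword score.

    Sketch of the attention + scoring stage from the abstract; the
    additive-attention form and parameter names are assumptions.
    """
    # Per-frame attention energies: e_t = v^T tanh(W h_t + b)
    e = np.tanh(h @ W.T + b) @ v      # shape (T,)
    alpha = softmax(e)                # attention weights over frames
    c = alpha @ h                     # fixed-length context vector, shape (D,)
    return softmax(U @ c)             # [P(non-keyword), P(keyword)]

rng = np.random.default_rng(0)
T, D, A = 20, 16, 8                   # frames, encoder dim, attention dim (illustrative)
h = rng.standard_normal((T, D))       # stand-in for RNN/CRNN encoder outputs
W = rng.standard_normal((A, D))
b = np.zeros(A)
v = rng.standard_normal(A)
U = rng.standard_normal((2, D))       # linear layer producing the 2-class score
probs = attention_kws_score(h, W, b, v, U)
```

In a real system the keyword probability `probs[1]` would be compared against a threshold swept to trade off false rejections against false alarms per hour.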

Original language: English
Pages (from-to): 2037-2041
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2018-September
State: Published - 2018
Event: 19th Annual Conference of the International Speech Communication Association, INTERSPEECH 2018 - Hyderabad, India
Duration: 2 Sep 2018 - 6 Sep 2018

Keywords

  • Attention-based model
  • Convolutional neural networks
  • End-to-end keyword spotting
  • Recurrent neural networks
