Adversarial regularization for attention based end-to-end robust speech recognition

Sining Sun, Pengcheng Guo, Lei Xie, Mei Yuh Hwang

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

End-to-end speech recognition, such as attention-based approaches, has been an emerging and attractive topic in recent years, achieving performance comparable to the traditional speech recognition framework. Because end-to-end approaches integrate acoustic and linguistic information into one model, perturbations at the acoustic level, such as acoustic noise, can easily propagate to the linguistic level. Improving the robustness of these end-to-end systems in real application environments is therefore crucial. In this paper, in order to make the attention-based end-to-end model more robust against noise, we regularize the objective function with adversarial training examples. In particular, two adversarial regularization techniques, the fast gradient-sign method (FGSM) and local distributional smoothness (LDS), are explored to improve noise robustness. Experiments on two publicly available Chinese Mandarin corpora, AISHELL-1 and AISHELL-2, show that adversarial regularization is an effective approach to improve robustness against noise for our attention-based models. Specifically, we obtained an 18.4% relative character error rate (CER) reduction on the AISHELL-1 noisy test set; even on the clean test set, we showed a 16.7% relative improvement. As the training set grows and covers more environmental varieties, the proposed methods remain effective although the improvement shrinks: training on the large AISHELL-2 training corpus and testing on the various AISHELL-2 test sets, we achieved 7.0%-12.2% relative error rate reductions. To our knowledge, this is the first successful application of adversarial regularization to sequence-to-sequence speech recognition systems.
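The core idea, roughly, is to treat an adversarially perturbed copy of the acoustic features as an extra training signal and add its loss to the clean-data objective. Below is a minimal sketch of FGSM-style adversarial regularization in this spirit; it is not the authors' implementation, and the names model, loss_fn, epsilon, and the weighting alpha are illustrative assumptions.

import torch

def fgsm_adversarial_loss(model, loss_fn, feats, targets, epsilon=0.3, alpha=0.5):
    # Hypothetical sketch: model maps acoustic features to output scores,
    # loss_fn is the usual sequence training loss (e.g., cross-entropy).
    feats = feats.clone().detach().requires_grad_(True)
    clean_loss = loss_fn(model(feats), targets)

    # Fast gradient-sign perturbation of the input features.
    grad = torch.autograd.grad(clean_loss, feats, retain_graph=True)[0]
    adv_feats = (feats + epsilon * grad.sign()).detach()

    # Loss on the adversarial copy, added to the clean objective as a regularizer.
    adv_loss = loss_fn(model(adv_feats), targets)
    return clean_loss + alpha * adv_loss

In the LDS variant, the adversarial term is instead a smoothness measure (e.g., a KL divergence) between the model's outputs on clean and perturbed inputs, rather than a second supervised loss.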

Original language: English
Article number: 3370726
Pages (from-to): 1826-1838
Number of pages: 13
Journal: IEEE/ACM Transactions on Audio Speech and Language Processing
Volume: 27
Issue number: 11
DOI
Publication status: Published - Nov 2019
