TY - GEN
T1 - Context-aware RNNLM Rescoring for Conversational Speech Recognition
AU - Wei, Kun
AU - Guo, Pengcheng
AU - Lv, Hang
AU - Tu, Zhen
AU - Xie, Lei
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/1/24
Y1 - 2021/1/24
AB - Conversational speech recognition is regarded as a challenging task due to its free speaking style and long-term contextual dependencies. Prior work has explored the modeling of long-range context through RNNLM rescoring with improved performance. To further exploit the persistent nature of a conversation, such as topics or speaker turns, we extend the rescoring procedure in a new context-aware manner. For RNNLM training, we capture the contextual dependencies by concatenating adjacent sentences with various tag words, such as speaker or intention information. For lattice rescoring, the lattices of adjacent sentences are also connected with the first-pass decoded results by tag words. In addition, we adopt a selective concatenation strategy based on TF-IDF, making the best use of contextual similarity to improve transcription performance. Results on four different conversation test sets show that our approach yields up to 13.1% and 6% relative character error rate (CER) reductions compared with first-pass decoding and common lattice rescoring, respectively. Index Terms: conversational speech recognition, recurrent neural network language model, lattice rescoring.
UR - http://www.scopus.com/inward/record.url?scp=85102563795&partnerID=8YFLogxK
U2 - 10.1109/ISCSLP49672.2021.9362109
DO - 10.1109/ISCSLP49672.2021.9362109
M3 - Conference contribution
AN - SCOPUS:85102563795
T3 - 2021 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
BT - 2021 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 12th International Symposium on Chinese Spoken Language Processing, ISCSLP 2021
Y2 - 24 January 2021 through 27 January 2021
ER -