LET-NLM-Decoder: A WFST-based asynchronous lazy-evaluation token-group decoder for first-pass neural language model decoding

Fangyi Li, Hang Lv, Yiming Wang, Lei Xie

Research output: Contribution to journal › Article › peer-review

Abstract

Neural language models (NLMs) have been shown to outperform n-gram language models in automatic speech recognition (ASR) tasks. NLMs are usually applied in second-pass lattice rescoring rather than first-pass decoding, since the effectively infinite history they encode cannot be compiled into a static decoding graph. However, the constraints imposed by the lattice prevent the modeling power of NLMs from being fully exploited, leading to accuracy loss. To address this, on-the-fly composition decoders have been proposed that use NLMs in first-pass decoding, at the cost of increased computation. In this paper, an asynchronous lazy-evaluation token-group decoder with exact lattice generation is proposed to reduce the computational cost of the on-the-fly composition decoder, achieving a significant decoding speedup. More specifically, built on a novel token-group data structure with a representative element, the proposed decoder performs lazy evaluation, which expands the tokens until a word boundary is reached. Furthermore, based on the score of the representative element of each token-group, the decoder prunes unpromising tokens with an A* algorithm. Experiments show that the proposed decoder accelerates the vanilla on-the-fly composition decoder by up to 6.9 times and finds paths with even better average likelihoods than lattice-rescoring approaches.
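
To make the idea in the abstract concrete, below is a minimal Python sketch, not the paper's implementation: it assumes a hypothetical TokenGroup that shares a decoding-graph state, uses its best-scoring token as the representative element for A*-style beam pruning, and defers per-token NLM scoring until a word boundary is reached. All names (Token, TokenGroup, nlm_score, prune_groups) are illustrative assumptions rather than identifiers from the paper.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Token:
    score: float                      # accumulated cost so far (lower is better)
    history: Tuple[str, ...] = ()     # word history consumed by the NLM

@dataclass
class TokenGroup:
    state: int                        # shared decoding-graph state
    tokens: List[Token] = field(default_factory=list)

    def representative(self) -> Token:
        # Representative element: the best-scoring token in the group.
        return min(self.tokens, key=lambda t: t.score)

def prune_groups(groups: List[TokenGroup], beam: float) -> List[TokenGroup]:
    """A*-style pruning at the group level: only representative scores are
    compared, so grouped tokens are kept or dropped together without
    evaluating the NLM for each of them."""
    best = min(g.representative().score for g in groups)
    return [g for g in groups if g.representative().score <= best + beam]

def expand_at_word_boundary(group: TokenGroup, word: str,
                            nlm_score: Callable[[Tuple[str, ...], str], float]) -> None:
    """Lazy evaluation: only when the group crosses a word boundary are its
    tokens rescored individually (nlm_score is a placeholder for the NLM call)."""
    for tok in group.tokens:
        tok.score += nlm_score(tok.history, word)
        tok.history = tok.history + (word,)

# Toy usage: two groups; pruning consults only the representatives, and the
# surviving group is rescored once a word boundary is reached.
if __name__ == "__main__":
    dummy_nlm = lambda history, word: 0.5 * (len(history) + 1)   # stand-in NLM cost
    g1 = TokenGroup(state=3, tokens=[Token(1.0, ("hello",)), Token(1.4, ("oh", "hello"))])
    g2 = TokenGroup(state=7, tokens=[Token(9.0)])
    survivors = prune_groups([g1, g2], beam=5.0)                 # g2 is pruned
    for g in survivors:
        expand_at_word_boundary(g, "world", dummy_nlm)
    print([t.score for t in survivors[0].tokens])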

Original language: English
Article number: e70145
Journal: Electronics Letters
Volume: 61
Issue number: 1
DOIs
State: Published - 1 Jan 2025

Keywords

  • speech
  • speech processing
  • speech recognition
