Weakly Supervised Data Refinement and Flexible Sequence Compression for Efficient Thai LLM-based ASR

Mingchen Shao, Xinfa Zhu, Chengyou Wang, Bingshen Mu, Hai Li, Ying Yan, Junhui Liu, Danming Xie, Lei Xie

Research output: Contribution to journal › Conference article › peer-review

Abstract

Despite remarkable achievements, automatic speech recognition (ASR) in low-resource scenarios still faces two challenges: scarcity of high-quality data and high computational demands. This paper proposes EThai-ASR, the first work to apply large language models (LLMs) to Thai ASR and to build an efficient LLM-based ASR system for the language. EThai-ASR comprises a speech encoder, a connection module, and a Thai LLM decoder. To address the data scarcity and obtain a powerful speech encoder, EThai-ASR introduces a self-evolving data refinement strategy that refines weak labels, yielding an enhanced speech encoder. Moreover, we propose a pluggable sequence compression module, used within the connection module, with three modes designed to reduce the sequence length, thus decreasing computational demands while maintaining decent performance. Extensive experiments demonstrate that EThai-ASR achieves state-of-the-art accuracy on multiple datasets. We release our refined text transcripts to promote further research.
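The abstract does not detail the three compression modes, but the general idea of shortening the encoder output sequence before it reaches the LLM decoder can be illustrated with common reduction schemes. The sketch below shows three hypothetical modes (frame stacking, group averaging, and strided subsampling); the mode names, function name, and shapes are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def compress_sequence(frames: np.ndarray, rate: int, mode: str = "stack") -> np.ndarray:
    """Shorten a (T, D) sequence of encoder frames by a factor of `rate`.

    Hypothetical modes (the paper's actual three modes are not specified here):
      - "stack":  concatenate each group of `rate` frames -> (ceil(T/rate), D*rate)
      - "mean":   average each group of `rate` frames     -> (ceil(T/rate), D)
      - "stride": keep every `rate`-th frame              -> (ceil(T/rate), D)
    """
    T, D = frames.shape
    if mode == "stride":
        return frames[::rate]
    # Zero-pad T up to a multiple of `rate` so every group is complete.
    pad = (-T) % rate
    if pad:
        frames = np.concatenate([frames, np.zeros((pad, D))], axis=0)
    groups = frames.reshape(-1, rate, D)
    if mode == "stack":
        return groups.reshape(-1, rate * D)
    if mode == "mean":
        return groups.mean(axis=1)
    raise ValueError(f"unknown mode: {mode}")
```

Any of these reduces the number of tokens the LLM must attend over by roughly the compression rate, which is where the computational savings in such a connection module would come from.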

Original language: English
Pages (from-to): 748-752
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
State: Published - 2025
Event: 26th Interspeech Conference 2025 - Rotterdam, Netherlands
Duration: 17 Aug 2025 - 21 Aug 2025

Keywords

  • data refinement
  • low-resource scenarios
  • pluggable sequence compression
