Towards Rehearsal-Free Multilingual ASR: A LoRA-based Case Study on Whisper

Tianyi Xu, Kaixun Huang, Pengcheng Guo, Yu Zhou, Longtao Huang, Hui Xue, Lei Xie

Research output: Contribution to journal › Conference article › peer-review

Abstract

Pre-trained multilingual speech foundation models, like Whisper, have shown impressive performance across different languages. However, adapting these models to new or specific languages is computationally expensive and suffers from catastrophic forgetting. Addressing these issues, our study investigates strategies to enhance the model on new languages in the absence of original training data, while also preserving the established performance on the original languages. Specifically, we first compare various LoRA-based methods to assess their vulnerability to forgetting. To mitigate this issue, we propose to leverage the LoRA parameters from the original model for approximate orthogonal gradient descent on the new samples. Additionally, we introduce a learnable rank coefficient to allocate trainable parameters for more efficient training. Our experiments with a Chinese Whisper model (adapting to Uyghur and Tibetan) yield better results with a more compact parameter set.
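To make the two ideas in the abstract more concrete, the PyTorch snippet below is a minimal sketch of a LoRA-adapted linear layer in which (i) the gradient of the new adapter is projected to be approximately orthogonal to the subspace spanned by the original model's LoRA parameters, and (ii) a learnable per-rank coefficient gates how much each rank direction is used. The class name OrthoLoRALinear, the QR-based projection, and the per-rank gate are illustrative assumptions, not the authors' released implementation.

    import torch
    import torch.nn as nn


    class OrthoLoRALinear(nn.Module):
        """Frozen base linear layer with a frozen 'original' LoRA adapter plus a
        trainable 'new-language' LoRA adapter whose gradient is projected to be
        approximately orthogonal to the original adapter's subspace."""

        def __init__(self, base: nn.Linear, old_A: torch.Tensor, old_B: torch.Tensor, new_rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad_(False)

            # Original LoRA factors (frozen); old_A: (r_old, in_features),
            # old_B: (out_features, r_old). They define the subspace to preserve.
            self.register_buffer("old_A", old_A)
            self.register_buffer("old_B", old_B)

            # Orthonormal basis of the row space of old_A, used for gradient projection.
            Q, _ = torch.linalg.qr(old_A.T)  # (in_features, r_old)
            self.register_buffer("old_Q", Q)

            # New LoRA factors for the new language.
            in_f, out_f = base.in_features, base.out_features
            self.new_A = nn.Parameter(torch.randn(new_rank, in_f) * 0.01)
            self.new_B = nn.Parameter(torch.zeros(out_f, new_rank))

            # Learnable rank coefficient: training decides how much capacity
            # each rank direction of the new adapter actually uses.
            self.rank_coef = nn.Parameter(torch.ones(new_rank))

            # Approximate orthogonal gradient descent: remove the component of
            # new_A's gradient that lies inside the original adapter's subspace.
            self.new_A.register_hook(lambda g: g - (g @ self.old_Q) @ self.old_Q.T)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            delta_old = (x @ self.old_A.T) @ self.old_B.T
            delta_new = ((x @ self.new_A.T) * self.rank_coef) @ self.new_B.T
            return self.base(x) + delta_old + delta_new

In this sketch the base layer and the original adapter stay frozen, so only new_A, new_B, and rank_coef receive updates when fine-tuning on the new language; the projection hook keeps those updates approximately orthogonal to the directions already occupied by the original LoRA parameters.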

Original language: English
Pages (from-to): 2534-2538
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
DOIs
State: Published - 2024
Event: 25th Interspeech Conference 2024 - Kos Island, Greece
Duration: 1 Sep 2024 - 5 Sep 2024

Keywords

  • Automatic Speech Recognition
  • Continual learning
  • Orthogonal gradient
  • Parameter-efficient tuning
