TY - JOUR
T1 - Towards Rehearsal-Free Multilingual ASR
T2 - 25th Interspeech Conference 2024
AU - Xu, Tianyi
AU - Huang, Kaixun
AU - Guo, Pengcheng
AU - Zhou, Yu
AU - Huang, Longtao
AU - Xue, Hui
AU - Xie, Lei
N1 - Publisher Copyright:
© 2024 International Speech Communication Association. All rights reserved.
PY - 2024
Y1 - 2024
N2 - Pre-trained multilingual speech foundation models, like Whisper, have shown impressive performance across different languages. However, adapting these models to new or specific languages is computationally extensive and faces catastrophic forgetting problems. Addressing these issues, our study investigates strategies to enhance the model on new languages in the absence of original training data, while also preserving the established performance on the original languages. Specifically, we first compare various LoRA-based methods to find out their vulnerability to forgetting. To mitigate this issue, we propose to leverage the LoRA parameters from the original model for approximate orthogonal gradient descent on the new samples. Additionally, we also introduce a learnable rank coefficient to allocate trainable parameters for more efficient training. Our experiments with a Chinese Whisper model (for Uyghur and Tibetan) yield better results with a more compact parameter set.
AB - Pre-trained multilingual speech foundation models, like Whisper, have shown impressive performance across different languages. However, adapting these models to new or specific languages is computationally extensive and faces catastrophic forgetting problems. Addressing these issues, our study investigates strategies to enhance the model on new languages in the absence of original training data, while also preserving the established performance on the original languages. Specifically, we first compare various LoRA-based methods to find out their vulnerability to forgetting. To mitigate this issue, we propose to leverage the LoRA parameters from the original model for approximate orthogonal gradient descent on the new samples. Additionally, we also introduce a learnable rank coefficient to allocate trainable parameters for more efficient training. Our experiments with a Chinese Whisper model (for Uyghur and Tibetan) yield better results with a more compact parameter set.
KW - Automatic Speech Recognition
KW - Continual learning
KW - Orthogonal gradient
KW - Parameter-efficient tuning
UR - http://www.scopus.com/inward/record.url?scp=85214811152&partnerID=8YFLogxK
U2 - 10.21437/Interspeech.2024-1953
DO - 10.21437/Interspeech.2024-1953
M3 - Conference article
AN - SCOPUS:85214811152
SN - 2308-457X
SP - 2534
EP - 2538
JO - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
JF - Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Y2 - 1 September 2024 through 5 September 2024
ER -