Improving data augmentation for low resource speech-to-text translation with diverse paraphrasing

Chenggang Mi, Lei Xie, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

26 Scopus citations

Abstract

High-quality end-to-end speech translation models rely on large-scale speech-to-text training data, which is usually scarce or even unavailable for some low-resource language pairs. To overcome this, we propose a target-side data augmentation method for low-resource speech translation. In particular, we first generate large-scale target-side paraphrases with a paraphrase generation model that incorporates several statistical machine translation (SMT) features and a commonly used recurrent neural network (RNN) feature. Then, a filtering model that combines semantic similarity and speech–word pair co-occurrence is used to select the highest-scoring source speech–target paraphrase pairs from the candidates. Experimental results on English, Arabic, German, Latvian, Estonian, Slovenian and Swedish paraphrase generation show that the proposed method achieves significant and consistent improvements over several strong baseline models on PPDB datasets (http://paraphrase.org/). To introduce the paraphrase generation results into low-resource speech translation, we propose two strategies: audio–text pairs recombination and multiple references training. Experimental results show that speech translation models trained on the new audio–text datasets, which incorporate the paraphrase generation results, achieve substantial improvements over the baselines, especially for low-resource languages.
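
Since the abstract describes the filtering and recombination steps only at a high level, the following Python sketch is a purely illustrative reconstruction of the idea: candidate target-side paraphrases are scored by a linear combination of a semantic similarity term and a speech–word co-occurrence term, and the top-scoring candidates are recombined with the original audio to form new audio–text training pairs. All function names, the toy similarity measures, and the linear weighting are assumptions made for illustration and do not correspond to the paper's actual models.

```python
# Illustrative sketch of target-side augmentation via paraphrase filtering and
# audio-text pair recombination. The scoring functions are toy stand-ins,
# not the models described in the paper.

from collections import Counter
from dataclasses import dataclass
from math import sqrt
from typing import Dict, List, Tuple


@dataclass
class AudioTextPair:
    audio_path: str   # path to the source-language speech recording
    target_text: str  # target-language text paired with the audio


def cosine_similarity(a: str, b: str) -> float:
    """Toy stand-in for the semantic similarity model (bag-of-words cosine)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0


def cooccurrence_score(audio_words: List[str], paraphrase: str,
                       cooc_counts: Dict[Tuple[str, str], int]) -> float:
    """Toy stand-in for the speech-word / target-word co-occurrence feature.

    `audio_words` would come from an ASR pass or forced alignment, and
    `cooc_counts` would be estimated from the aligned speech-text corpus.
    """
    pairs = [(s, t) for s in audio_words for t in paraphrase.lower().split()]
    if not pairs:
        return 0.0
    return sum(cooc_counts.get(p, 0) for p in pairs) / len(pairs)


def augment(pair: AudioTextPair, candidates: List[str], audio_words: List[str],
            cooc_counts: Dict[Tuple[str, str], int],
            alpha: float = 0.5, top_k: int = 2) -> List[AudioTextPair]:
    """Keep the top-k paraphrases under a linear combination of both scores
    and pair each with the original audio (audio-text pair recombination)."""
    scored = [
        (alpha * cosine_similarity(pair.target_text, c)
         + (1 - alpha) * cooccurrence_score(audio_words, c, cooc_counts), c)
        for c in candidates
    ]
    scored.sort(reverse=True)
    return [AudioTextPair(pair.audio_path, text) for _, text in scored[:top_k]]


if __name__ == "__main__":
    original = AudioTextPair("utt_001.wav", "the weather is nice today")
    paraphrases = ["today the weather is pleasant",
                   "it is a nice day",
                   "the stock market fell sharply"]
    # Tiny illustrative co-occurrence table: (source word, target word) -> count.
    cooc = {("wetter", "weather"): 12, ("heute", "today"): 9, ("schön", "nice"): 7}
    for p in augment(original, paraphrases, ["wetter", "heute", "schön"], cooc):
        print(p.audio_path, "->", p.target_text)
```

In practice the similarity term would come from a trained semantic model and the co-occurrence table from the parallel data; the second strategy mentioned in the abstract, multiple references training, would instead keep several selected paraphrases per utterance as alternative training targets rather than emitting separate recombined pairs.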

Original language: English
Pages (from-to): 194-205
Number of pages: 12
Journal: Neural Networks
Volume: 148
DOIs
State: Published - Apr 2022

Keywords

  • Data augmentation
  • Paraphrasing
  • Speech translation
