DBN model based Multi-stream asynchrony triphone for audio-visual speech recognition and phone segmentation

Guo Yun Lu, Dong Mei Jiang, Yang Yu Fan, Rong Chun Zhao, H. Sahli, W. Verhelst

Research output: Contribution to journal › Article › peer-review

1 Scopus citations

Abstract

In this paper, a novel Multi-stream Multi-states Asynchronous Dynamic Bayesian Network based context-dependent TRIphone (MM-ADBN-TRI) model is proposed for audio-visual speech recognition and phone segmentation. The model loosens the asynchrony between the audio and visual streams up to the word level. In both the audio stream and the visual stream, a word-triphone-state topology is used. Essentially, the MM-ADBN-TRI model is a triphone model whose basic recognition units are triphones, which captures the variations in real continuous speech spectra more accurately. Recognition and segmentation experiments on a continuous-digit audio-visual speech database show that the MM-ADBN-TRI model achieves the best overall performance in word accuracy and in phone segmentation with time boundaries, as well as a more reasonable asynchrony between the audio and visual speech.

Original language: English
Pages (from-to): 297-301
Number of pages: 5
Journal: Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology
Volume: 31
Issue number: 2
State: Published - Feb 2009

Keywords

  • Audio-visual
  • Dynamic Bayesian network
  • Phone segmentation
  • Speech recognition
