Multi-stream articulator model with adaptive reliability measure for audio visual speech recognition

Lei Xie, Zhi Qiang Liu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

We propose a multi-stream articulator model (MSAM) for audio-visual speech recognition (AVSR). This model extends the articulator modelling technique recently used in audio-only speech recognition to the audio-visual domain. A multi-stream structure with a shared articulator layer is used in the model to mimic the speech production process. We also present an adaptive reliability measure (ARM) based on two local dispersion indicators, which integrates the audio and visual streams according to their local, temporal reliability. Experiments on the AVCONDIG database show that our model achieves recognition performance comparable with the multi-stream hidden Markov model (MSHMM) under various noisy conditions. With the help of the ARM, our model even performs best at some tested SNRs.
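The abstract's fusion idea, combining audio and visual stream scores with weights driven by local reliability indicators, can be sketched as follows. This is a minimal illustration, not the paper's exact ARM: the dispersion indicator (peakedness of a frame's class log-likelihoods) and the weight normalization are assumptions chosen for clarity.

```python
import numpy as np

def local_dispersion(loglik_frame):
    """Dispersion of one frame's class log-likelihoods.

    A peaked distribution (large gap between the best class and the
    rest) suggests a reliable stream; a flat one suggests noise.
    Illustrative indicator only, not the paper's ARM definition.
    """
    top = np.sort(loglik_frame)[::-1]
    return float(top[0] - np.mean(top[1:]))

def adaptive_weights(audio_ll, video_ll):
    """Per-frame stream weights from the two dispersion indicators."""
    da = local_dispersion(audio_ll)
    dv = local_dispersion(video_ll)
    total = da + dv
    wa = da / total if total > 0 else 0.5  # fall back to equal weights
    return wa, 1.0 - wa

def fuse(audio_ll, video_ll):
    """Reliability-weighted combination of the two streams' scores."""
    wa, wv = adaptive_weights(audio_ll, video_ll)
    return wa * audio_ll + wv * video_ll

# Example: a confident (peaked) audio frame vs. a flat visual frame.
audio_ll = np.array([0.0, -5.0, -5.0])   # clear winner: class 0
video_ll = np.array([-1.0, -1.1, -0.9])  # nearly uninformative
wa, wv = adaptive_weights(audio_ll, video_ll)
```

Here the audio stream's high dispersion pulls its weight well above 0.5, so the fused decision follows the audio frame; under acoustic noise the roles would reverse and the visual stream would dominate.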

Original language: English
Title of host publication: Advances in Machine Learning and Cybernetics - 4th International Conference, ICMLC 2005, Revised Selected Papers
Pages: 994-1004
Number of pages: 11
State: Published - 2006
Externally published: Yes
Event: 4th International Conference on Machine Learning and Cybernetics, ICMLC 2005 - Guangzhou, China
Duration: 18 Aug 2005 – 21 Aug 2005

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 3930 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 4th International Conference on Machine Learning and Cybernetics, ICMLC 2005
Country/Territory: China
City: Guangzhou
Period: 18/08/05 – 21/08/05
