Abstract
Lip-reading is the process of interpreting speech by visually analysing lip movements. Recent research in this area has shifted from recognising isolated words to lip-reading sentences in the wild. This paper uses phonemes as the classification schema for lip-reading sentences, both to explore an alternative to existing schemas and to improve system performance. Other classification schemas, including character-based and viseme-based schemas, are also investigated. The visual front-end of the system consists of a spatial-temporal (3D) convolution followed by a 2D ResNet; the phoneme recognition model is a Transformer that uses multi-headed attention; and a Recurrent Neural Network serves as the language model. The proposed system has been evaluated on the BBC Lip Reading Sentences 2 (LRS2) benchmark dataset. Compared with state-of-the-art approaches to lip-reading sentences, it achieves a word error rate that is 10% lower on average under varying illumination ratios.
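As a rough illustration of the pipeline the abstract describes, the following is a minimal PyTorch sketch of the three components: a spatial-temporal (3D) convolution front-end followed by a frame-wise 2D ResNet, a multi-headed-attention Transformer for phoneme recognition, and an RNN language model. All layer sizes, kernel shapes, the ResNet-18 backbone, the 8 attention heads, and the 40-symbol phoneme inventory are illustrative assumptions, not the configuration published in the paper.

```python
import torch
import torch.nn as nn
import torchvision.models as models


class VisualFrontEnd(nn.Module):
    """3D conv over the frame sequence, then a 2D ResNet per frame."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        # Spatial-temporal convolution over (T, H, W) of grayscale mouth crops.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2),
                         padding=(0, 1, 1)),
        )
        # 2D ResNet applied frame by frame; reuse ResNet-18's residual stages
        # (its 7x7 stem is replaced by the 3D conv above).
        resnet = models.resnet18(weights=None)
        self.resnet2d = nn.Sequential(
            resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4,
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(512, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1, T, H, W) clips of the mouth region.
        x = self.conv3d(x)                          # (B, 64, T, H', W')
        b, c, t, h, w = x.shape
        x = x.transpose(1, 2).reshape(b * t, c, h, w)
        x = self.resnet2d(x).flatten(1)             # (B*T, 512)
        return self.proj(x).reshape(b, t, -1)       # (B, T, feat_dim)


class PhonemeTransformer(nn.Module):
    """Multi-headed self-attention encoder emitting per-frame phoneme logits."""

    def __init__(self, feat_dim: int = 512, n_phonemes: int = 40):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.classifier = nn.Linear(feat_dim, n_phonemes)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.encoder(feats))  # (B, T, n_phonemes)


class RNNLanguageModel(nn.Module):
    """Sketch of an RNN language model for rescoring phoneme/word hypotheses."""

    def __init__(self, vocab_size: int = 40, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens))
        return self.out(h)                           # next-symbol logits


# Example: phoneme logits for a batch of two 29-frame, 96x96 mouth-crop clips.
logits = PhonemeTransformer()(VisualFrontEnd()(torch.randn(2, 1, 29, 96, 96)))
```

In this sketch the Transformer's per-frame phoneme logits would be decoded into a phoneme sequence and then mapped to words with the language model; the decoding step is omitted here as the paper's specific decoding scheme is not detailed in the abstract.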
| Original language | English |
| --- | --- |
| Pages (from-to) | 129-138 |
| Number of pages | 10 |
| Journal | CAAI Transactions on Intelligence Technology |
| Volume | 8 |
| Issue number | 1 |
| DOIs | |
| State | Published - Mar 2023 |
Keywords
- deep learning
- deep neural networks
- lip-reading
- phoneme-based lip-reading
- spatial-temporal convolution
- transformers