Abstract
3D face scans have been widely used for face modeling and analysis. Because face scans produce variable point clouds across frames, they may capture incomplete facial data and lack point-to-point correspondences across scans, which makes such data difficult to use for analysis. This paper presents an efficient approach to representing facial shapes from face scans by reconstructing face models based on regional information and a generic model. A new approach to 3D feature detection and a hybrid approach, combining two vertex mapping algorithms (displacement mapping and point-to-surface mapping) with a regional blending algorithm, are proposed to reconstruct facial surface detail. The resulting models represent individual facial shapes consistently and adaptively, establishing facial point correspondences across individual models. The accuracy of the generated models is evaluated quantitatively. The applicability of the models is validated through 3D facial expression recognition on the static 3DFE and dynamic 4DFE databases. A comparison with the state of the art is also reported.
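As a rough illustration of the displacement-mapping idea mentioned in the abstract, the sketch below moves each generic-model vertex along its normal by the signed distance to the nearest scan point, so that the fitted surface inherits the generic model's vertex correspondences. This is a minimal assumption-laden sketch (function names, the nearest-neighbor query, and the per-vertex normal handling are illustrative), not the authors' implementation.

```python
# Minimal sketch of displacement mapping: not the paper's algorithm, only an
# illustrative assumption of how generic-model vertices could be displaced
# toward a raw face-scan point cloud while keeping vertex correspondences.
import numpy as np
from scipy.spatial import cKDTree

def displacement_map(generic_vertices, generic_normals, scan_points):
    """Displace generic-model vertices toward the nearest scan points.

    generic_vertices : (N, 3) generic face-model vertex positions
    generic_normals  : (N, 3) unit normals at those vertices
    scan_points      : (M, 3) raw face-scan points
    Returns an (N, 3) array of displaced vertices; correspondences are
    inherited from the generic model's fixed vertex order.
    """
    tree = cKDTree(scan_points)
    _, idx = tree.query(generic_vertices)            # nearest scan point per vertex
    offsets = scan_points[idx] - generic_vertices    # vector to that nearest point
    # Project each offset onto the vertex normal to get a signed displacement.
    d = np.einsum('ij,ij->i', offsets, generic_normals)
    return generic_vertices + d[:, None] * generic_normals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    verts = rng.normal(size=(100, 3))
    normals = verts / np.linalg.norm(verts, axis=1, keepdims=True)
    scan = verts + 0.05 * rng.normal(size=(100, 3))  # noisy "scan" of the same shape
    fitted = displacement_map(verts, normals, scan)
    print("mean residual:", np.linalg.norm(fitted - scan, axis=1).mean())
```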
| Original language | English |
| --- | --- |
| Pages (from-to) | 750-761 |
| Number of pages | 12 |
| Journal | Image and Vision Computing |
| Volume | 30 |
| Issue | 10 |
| DOI | |
| Publication status | Published - Oct 2012 |