单幅人脸图像的全景纹理图生成方法

Translated title of the contribution: Single face image-based panoramic texture map generation

Yang Liu, Yangyu Fan, Zhe Guo, Guoyun Lyu, Shiya Liu

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Objective: Face texture map generation is a key part of face identification research: the face texture maps the pixel information of a two-dimensional (2D) image onto the corresponding 3D face model. There are currently two main ways to acquire a face texture. The first is full-coverage head scanning with a laser scanner; the second derives the texture from face image information. High-accuracy scanning is performed in a controlled environment and captures appearance information well, but it is mostly used to collect images for databases. The original face texture map based on 2D images is obtained by simply stitching images of the target head captured from several viewing angles. Some researchers jointly use raw texture images from five views, which means face texture reconstruction is performed under restricted conditions. This approach can precisely recover the details of the whole head from the pixel information of the complementary face images, but it is difficult to apply in practice, and differences in facial lighting and camera parameters across the viewing angles cause discontinuous pixel changes in the generated texture. Because the pixel information in a single face image is incomplete, the general method is to perform texture mapping based on the pixel distribution of the 3D face model in UV space (the 2D texture coordinate space). The overall face-and-head texture can be recovered by filling the missing area with pixel averaging and pixel interpolation, but the resulting pixel distribution is quite inconsistent with the original image. A 3D morphable model (3DMM) can restore the facial texture map from a single image, and the 3DMM texture can map the 3D pixel data onto the 2D plane with per-pixel alignment through the UV parameterization. Nevertheless, the statistical texture model must be built from scans captured under constrained conditions to acquire low- and high-frequency and albedo information. Such a texture model is difficult to obtain and is also challenging to apply to "in-the-wild" images. Meanwhile, such methods cannot recover complicated skin pigment changes or layered texture details (such as freckles, pores, moles and surface hair). In general, reconstructing a facial texture map from a single face image remains challenging. First, because of the fixed pose, effective pixel information for the profile and head regions is lost in a single face image, so the UV texture map obtained by conventional methods is incomplete. Second, it is difficult to recover a photorealistic texture from an unconstrained image because the lighting conditions and camera parameters cannot be determined in unconstrained circumstances.

Method: A method for generating panoramic face texture maps is proposed based on generative adversarial networks. The method exploits the correlation between the 2D face image and the 3D face model to obtain face parameters from the input image, and a structure is designed that integrates the characteristics of the variational auto-encoder and the generative adversarial network to learn face-and-head texture features. The face parameter vectors are converted into latent vectors and added as condition attributes to constrain the generation process of the networks. The panoramic texture map generation model is trained on our facial texture dataset.
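To make the conditioning idea concrete, the following is a minimal sketch (PyTorch assumed) of how fitted face parameters could be embedded into a latent condition vector and concatenated with a VAE-style latent code before decoding a UV texture map. The class name, the 199-dimensional parameter vector, and all layer sizes are illustrative assumptions, not the network described in the paper.

```python
# Minimal sketch, not the authors' network: a UV texture generator conditioned
# on 3DMM-style face parameters. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalTextureGenerator(nn.Module):
    def __init__(self, z_dim=128, param_dim=199, feat=64):
        super().__init__()
        # Project the fitted face parameters (e.g. shape/expression/pose
        # coefficients) into a compact conditioning vector.
        self.param_embed = nn.Sequential(
            nn.Linear(param_dim, 128), nn.ReLU(inplace=True))
        # Decode the concatenated latent + condition into a UV texture map.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(z_dim + 128, feat * 8, 4, 1, 0),  # 1x1 -> 4x4
            nn.BatchNorm2d(feat * 8), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1),     # 4x4 -> 8x8
            nn.BatchNorm2d(feat * 4), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1),     # 8x8 -> 16x16
            nn.BatchNorm2d(feat * 2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1),         # 16x16 -> 32x32
            nn.BatchNorm2d(feat), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, 3, 4, 2, 1),                # 32x32 -> 64x64
            nn.Tanh())

    def forward(self, z, face_params):
        cond = self.param_embed(face_params)   # (B, 128) condition vector
        h = torch.cat([z, cond], dim=1)        # (B, z_dim + 128)
        h = h.view(h.size(0), -1, 1, 1)        # reshape to a 1x1 feature map
        return self.decode(h)                  # (B, 3, 64, 64) UV texture

# Usage sketch: z would come from a VAE-style encoder applied to the visible
# face texture, while face_params come from fitting a 3DMM to the input image.
g = ConditionalTextureGenerator()
uv_tex = g(torch.randn(2, 128), torch.randn(2, 199))
```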
At the same time, multiple attribute discriminators evaluate the output and feed the results back to improve the completeness and authenticity of the generated texture. A face UV texture database is built: some samples come from the WildUV dataset, which contains texture images of nearly 2,000 individuals with different identities and 5,638 unique facial UV texture maps, and additional texture data are captured with a professional 3D scanning rig, in which approximately 400 subjects with different identities (250 male, 150 female) provided 2,000 UV texture maps. Data augmentation was then applied to the complete texture images, yielding a total of 10,143 texture samples for the experiments. These samples provide reliable data for the generative model.

Result: The results were compared with state-of-the-art face texture map generation methods. Test images were randomly selected from the CelebA-HQ and Labeled Faces in the Wild (LFW) datasets. In the visual comparison, the generated textures are mapped onto the corresponding 3D models; the proposed results cover the model more completely and produce more realistic renderings. A quantitative evaluation of the completeness of the generated face texture map and the accuracy of the facial region was also conducted: the reliability of restoring the regions invisible in the original image and the ability to retain facial features were measured with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).

Conclusion: The comparative tests demonstrate that the proposed method for generating a panoramic texture map from a single face image alleviates the incompleteness of facial texture reconstruction from a single image and enhances the texture details of the generated map. Conditioning the generative network on face parameters makes the output facial texture maps more complete, especially in the areas invisible in the original image; the pixels are restored clearly and consistently, and the texture details are more realistic.
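For reference, the PSNR/SSIM comparison mentioned in the Result section can be computed along the following lines. This is a minimal sketch assuming scikit-image (>= 0.19 for channel_axis); the file paths and the helper function name are placeholders, not artifacts from the paper.

```python
# Minimal sketch of a PSNR/SSIM comparison between a generated UV texture and
# a reference texture, assuming scikit-image; paths are placeholders.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def texture_quality(generated_path, reference_path):
    gen = io.imread(generated_path)   # generated UV texture (H, W, 3), uint8
    ref = io.imread(reference_path)   # reference UV texture of the same size
    psnr = peak_signal_noise_ratio(ref, gen, data_range=255)
    ssim = structural_similarity(ref, gen, channel_axis=-1, data_range=255)
    return psnr, ssim

psnr, ssim = texture_quality("generated_uv.png", "reference_uv.png")
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```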

Translated title of the contribution: Single face image-based panoramic texture map generation
Original language: Chinese (Traditional)
Pages (from-to): 602-613
Number of pages: 12
Journal: Journal of Image and Graphics
Volume: 27
Issue number: 2
DOIs
State: Published - 16 Feb 2022

