TY - JOUR
T1 - All-in-focus synthetic aperture imaging using generative adversarial network-based semantic inpainting
AU - Pei, Zhao
AU - Jin, Min
AU - Zhang, Yanning
AU - Ma, Miao
AU - Yang, Yee Hong
N1 - Publisher Copyright:
© 2020 Elsevier Ltd
PY - 2021/3
Y1 - 2021/3
N2 - Occlusion handling poses a significant challenge to many computer vision and pattern recognition applications. Recently, Synthetic Aperture Imaging (SAI), which uses more than two cameras, has been widely applied to reconstruct occluded objects in complex scenes. However, it usually fails in cases of heavy occlusion, particularly when the occluded information is not captured by any of the camera views. Hence, generating a realistic all-in-focus synthetic aperture image that shows a completely occluded object is a challenging task. In this paper, semantic inpainting using a Generative Adversarial Network (GAN) is proposed to address this problem. The proposed method first computes a synthetic aperture image of the occluded objects using a labeling method, together with an alpha matte of the partially occluded objects. Then, it uses energy minimization to reconstruct the background by focusing on the background depth of each camera. Finally, the occluded regions of the synthesized image are semantically inpainted using a GAN, and the results are composited with the reconstructed background to generate a realistic all-in-focus image. The experimental results demonstrate that the proposed method can handle heavy occlusions and produces better all-in-focus images than other state-of-the-art methods. Compared with traditional labeling methods, our method can quickly generate labels for occlusions without introducing noise. To the best of our knowledge, our method is the first to address missing information caused by heavy occlusions in SAI using a GAN.
AB - Occlusion handling poses a significant challenge to many computer vision and pattern recognition applications. Recently, Synthetic Aperture Imaging (SAI), which uses more than two cameras, has been widely applied to reconstruct occluded objects in complex scenes. However, it usually fails in cases of heavy occlusion, particularly when the occluded information is not captured by any of the camera views. Hence, generating a realistic all-in-focus synthetic aperture image that shows a completely occluded object is a challenging task. In this paper, semantic inpainting using a Generative Adversarial Network (GAN) is proposed to address this problem. The proposed method first computes a synthetic aperture image of the occluded objects using a labeling method, together with an alpha matte of the partially occluded objects. Then, it uses energy minimization to reconstruct the background by focusing on the background depth of each camera. Finally, the occluded regions of the synthesized image are semantically inpainted using a GAN, and the results are composited with the reconstructed background to generate a realistic all-in-focus image. The experimental results demonstrate that the proposed method can handle heavy occlusions and produces better all-in-focus images than other state-of-the-art methods. Compared with traditional labeling methods, our method can quickly generate labels for occlusions without introducing noise. To the best of our knowledge, our method is the first to address missing information caused by heavy occlusions in SAI using a GAN.
KW - Image inpainting
KW - Occlusions handling
KW - Synthetic aperture imaging
UR - http://www.scopus.com/inward/record.url?scp=85091666364&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2020.107669
DO - 10.1016/j.patcog.2020.107669
M3 - Article
AN - SCOPUS:85091666364
SN - 0031-3203
VL - 111
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 107669
ER -