SA-AE for Any-to-Any Relighting

Zhongyun Hu, Xin Huang, Yaning Li, Qing Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Peer-reviewed

5 Citations (Scopus)

Abstract

In this paper, we present Self-Attention AutoEncoder (SA-AE), a novel automatic model that generates a relit version of a source image to match the illumination setting of a guide image, a task called any-to-any relighting. To reduce the difficulty of learning, we adopt an implicit scene representation learned by the encoder and render the relit image with the decoder. On top of this learned scene representation, a lighting estimation network is formulated as a classification task that predicts the illumination settings from the guide images. A lighting-to-feature network is designed to recover the corresponding implicit scene representation from the illumination settings, the inverse of the lighting estimation network. In addition, a self-attention mechanism is introduced into the autoencoder to focus the re-rendering on relighting-related regions of the source images. Extensive experiments on the VIDIT dataset show that the proposed approach achieved first place in both MPS and SSIM in the AIM 2020 Any-to-Any Relighting Challenge.
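To illustrate the self-attention idea the abstract describes, the sketch below implements a generic non-local self-attention pass over an encoder feature map in plain NumPy. This is not the paper's exact module: the projection weights are random stand-ins for learned parameters, and the function name and shapes are hypothetical, chosen only to show how each spatial position can attend to all others so that relighting-related regions are re-weighted.

```python
import numpy as np

def self_attention(feat):
    """Minimal self-attention over a (C, H, W) feature map (illustrative only).

    Every spatial position attends to every other position, producing a
    globally re-weighted feature map plus a residual connection.
    """
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                  # flatten spatial dims: (C, N)
    # Query/key/value projections; random here, learned in a real model.
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.01 for _ in range(3))
    q, k, v = Wq @ x, Wk @ x, Wv @ x            # each (C, N)
    attn = q.T @ k / np.sqrt(C)                 # (N, N) pairwise affinities
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)     # softmax over positions
    out = v @ attn.T                            # aggregate values: (C, N)
    return feat + out.reshape(C, H, W)          # residual connection

feats = np.random.default_rng(1).standard_normal((8, 4, 4))
relit_feats = self_attention(feats)
print(relit_feats.shape)  # (8, 4, 4)
```

In the full model, a block like this would sit inside the autoencoder between the encoder's implicit scene representation and the decoder, letting the decoder draw on distant image regions when re-rendering illumination.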

Original language: English
Title of host publication: Computer Vision – ECCV 2020 Workshops, Proceedings
Editors: Adrien Bartoli, Andrea Fusiello
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 535-549
Number of pages: 15
ISBN (Print): 9783030670696
DOI
Publication status: Published - 2020
Event: Workshops held at the 16th European Conference on Computer Vision, ECCV 2020 - Glasgow, United Kingdom
Duration: 23 Aug 2020 – 28 Aug 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12537 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: Workshops held at the 16th European Conference on Computer Vision, ECCV 2020
Country/Territory: United Kingdom
City: Glasgow
Period: 23/08/20 – 28/08/20

