TY - JOUR
T1 - Learning pixel-adaptive weights for portrait photo retouching
AU - Wang, Binglu
AU - Lu, Chengzhe
AU - Yan, Dawei
AU - Zhao, Yongqiang
AU - Li, Ning
AU - Li, Xuelong
N1 - Publisher Copyright:
© 2023
PY - 2023/11
Y1 - 2023/11
N2 - Lookup table-based methods achieve promising retouching performance by learning image-adaptive weights to combine 3-dimensional lookup tables (3D LUTs) and conducting pixel-to-pixel color transformation. However, this paradigm ignores local context cues and applies the same transformation to portrait pixels and background pixels that exhibit the same raw RGB values. In contrast, an expert usually conducts different operations to adjust the color temperatures and tones of portrait regions and background regions. This inspires us to explicitly model local context cues to improve retouching quality. Thus, the center pixel of an image patch is first retouched by predicting pixel-adaptive lookup table weights. Because neighboring pixels exhibit different affinities to the center pixel, a local attention mask is estimated to modulate their influence. The quality of the local attention mask is then further improved by applying supervision based on an affinity map calculated from the ground-truth portrait mask. For group-level consistency, we propose to directly constrain the variance of the mean color components in the Lab space. Extensive experiments on the PPR10K dataset demonstrate the effectiveness of the proposed method: the retouching performance on high-resolution photos is improved by over 0.5 dB in terms of PSNR, and the group-level inconsistency is reduced by 2.1.
AB - Lookup table-based methods achieve promising retouching performance by learning image-adaptive weights to combine 3-dimensional lookup tables (3D LUTs) and conducting pixel-to-pixel color transformation. However, this paradigm ignores local context cues and applies the same transformation to portrait pixels and background pixels that exhibit the same raw RGB values. In contrast, an expert usually conducts different operations to adjust the color temperatures and tones of portrait regions and background regions. This inspires us to explicitly model local context cues to improve retouching quality. Thus, the center pixel of an image patch is first retouched by predicting pixel-adaptive lookup table weights. Because neighboring pixels exhibit different affinities to the center pixel, a local attention mask is estimated to modulate their influence. The quality of the local attention mask is then further improved by applying supervision based on an affinity map calculated from the ground-truth portrait mask. For group-level consistency, we propose to directly constrain the variance of the mean color components in the Lab space. Extensive experiments on the PPR10K dataset demonstrate the effectiveness of the proposed method: the retouching performance on high-resolution photos is improved by over 0.5 dB in terms of PSNR, and the group-level inconsistency is reduced by 2.1.
KW - 3D Lookup table
KW - Portrait photo retouching
KW - Visual attention
UR - http://www.scopus.com/inward/record.url?scp=85163807989&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2023.109775
DO - 10.1016/j.patcog.2023.109775
M3 - Article
AN - SCOPUS:85163807989
SN - 0031-3203
VL - 143
JO - Pattern Recognition
JF - Pattern Recognition
M1 - 109775
ER -