TY - GEN
T1 - Boosting No-Reference Super-Resolution Image Quality Assessment with Knowledge Distillation and Extension
AU - Zhang, Haiyu
AU - Su, Shaolin
AU - Zhu, Yu
AU - Sun, Jinqiu
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Deep learning (DL) based image super-resolution (SR) techniques have been well investigated in recent years. However, studies dedicated to SR image quality assessment (SR-IQA) have not been fully developed, and the task becomes even more difficult when pristine high-resolution (HR) images are lacking as a reference. Due to this challenge, existing widely used no-reference (NR) SR-IQA metrics (e.g., PI, NIQE, and Ma) are still far from meeting the practical requirement of providing accurate estimations that align well with human mean opinion scores (MOS). To this end, we propose a novel Knowledge Extension Super-Resolution Image Quality Assessment (KE-SR-IQA) framework to predict SR image quality by leveraging a semi-supervised knowledge distillation (KD) strategy. Concretely, we first employ a well-trained full-reference (FR) SR-IQA model as the teacher; we then perform knowledge extension (KE) with additional pseudo-labeled data to distill an NR student and improve prediction accuracy. Extensive experiments on several benchmarks validate the effectiveness of our approach.
AB - Deep learning (DL) based image super-resolution (SR) techniques have been well investigated in recent years. However, studies dedicated to SR image quality assessment (SR-IQA) have not been fully developed, and the task becomes even more difficult when pristine high-resolution (HR) images are lacking as a reference. Due to this challenge, existing widely used no-reference (NR) SR-IQA metrics (e.g., PI, NIQE, and Ma) are still far from meeting the practical requirement of providing accurate estimations that align well with human mean opinion scores (MOS). To this end, we propose a novel Knowledge Extension Super-Resolution Image Quality Assessment (KE-SR-IQA) framework to predict SR image quality by leveraging a semi-supervised knowledge distillation (KD) strategy. Concretely, we first employ a well-trained full-reference (FR) SR-IQA model as the teacher; we then perform knowledge extension (KE) with additional pseudo-labeled data to distill an NR student and improve prediction accuracy. Extensive experiments on several benchmarks validate the effectiveness of our approach.
KW - knowledge distillation (KD)
KW - knowledge extension (KE)
KW - no-reference (NR)
KW - super-resolution image quality assessment (SR-IQA)
UR - http://www.scopus.com/inward/record.url?scp=85177586708&partnerID=8YFLogxK
U2 - 10.1109/ICASSP49357.2023.10095465
DO - 10.1109/ICASSP49357.2023.10095465
M3 - Conference contribution
AN - SCOPUS:85177586708
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
BT - ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Y2 - 4 June 2023 through 10 June 2023
ER -