TY - JOUR
T1 - Lightweight Image Deblurring via Recurrent Gated Attention and Efficient Decoupling
AU - Chen, Jian
AU - Ye, Shilin
AU - Chen, Geng
AU - Atlaw, Meklit Mesfin
AU - Lin, Li
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - In recent years, deep learning has significantly advanced the field of image deblurring. However, existing deep learning models usually rely on heavy large-kernel convolutions or overweight attention modules, which impose a high computational burden and restrict real-world applications. To address this issue, we propose a lightweight deblurring network, termed RGE-Net. Our RGE-Net possesses two novel features: 1) We introduce a recurrent path into the convolutions so that each kernel weight can learn stronger feature representations, increasing parameter efficiency and reducing the parameter count. Furthermore, we propose a gated attention mechanism to suppress incorrect features flowing through the recurrent path, thereby improving performance. 2) We decouple the kernels into spatial and channel components to reduce learning difficulty by lowering the parameter count, and then apply an attention mechanism to achieve strong performance. Extensive experiments on benchmark datasets demonstrate the superiority of RGE-Net over state-of-the-art deblurring models in terms of both effectiveness and efficiency.
AB - In recent years, deep learning has significantly advanced the field of image deblurring. However, existing deep learning models usually rely on heavy large-kernel convolutions or overweight attention modules, which impose a high computational burden and restrict real-world applications. To address this issue, we propose a lightweight deblurring network, termed RGE-Net. Our RGE-Net possesses two novel features: 1) We introduce a recurrent path into the convolutions so that each kernel weight can learn stronger feature representations, increasing parameter efficiency and reducing the parameter count. Furthermore, we propose a gated attention mechanism to suppress incorrect features flowing through the recurrent path, thereby improving performance. 2) We decouple the kernels into spatial and channel components to reduce learning difficulty by lowering the parameter count, and then apply an attention mechanism to achieve strong performance. Extensive experiments on benchmark datasets demonstrate the superiority of RGE-Net over state-of-the-art deblurring models in terms of both effectiveness and efficiency.
KW - decoupled network
KW - gated attention
KW - lightweight image deblurring
KW - recurrent neural network
UR - http://www.scopus.com/inward/record.url?scp=85207437668&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2024.3484392
DO - 10.1109/TCSVT.2024.3484392
M3 - Article
AN - SCOPUS:85207437668
SN - 1051-8215
VL - 35
SP - 1814
EP - 1824
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 2
ER -