Efficient dynamic scene deblurring using spatially variant deconvolution network with optical flow guided training

Yuan Yuan, Wei Su, Dandan Ma

Research output: Contribution to journal › Conference article › peer-review

108 citations (Scopus)

Abstract

To remove the non-uniform blur of images captured in dynamic scenes, many deep-learning-based methods design deep networks with large receptive fields and strong fitting capability, or use a multi-scale strategy to deblur the image progressively at different scales. Restricted by their fixed structures and parameters, these methods tend to require very large models to handle complex blurs. In this paper, we start from the deblurring deconvolution operation and design an effective, real-time deblurring network. Our contributions are threefold: 1) we construct a spatially variant deconvolution network using modulated deformable convolutions, which adjusts its receptive fields adaptively according to the blur features; 2) our analysis shows that the sampling points of a deformable convolution can approximate the blur kernel, which can in turn be simplified to bi-directional optical flows, so the learned positions of the sampling points can be supervised by bi-directional optical flows; 3) we build a lightweight backbone for image restoration that balances computational cost and effectiveness well. Experimental results show that the proposed method achieves state-of-the-art deblurring performance with fewer parameters and a shorter running time.
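The modulated deformable convolution at the heart of contribution 1) can be sketched as follows. This is a minimal pure-Python illustration of the idea under our own assumptions, not the authors' implementation: each kernel tap samples the input at a learned fractional offset (bilinear interpolation) and is scaled by a learned modulation mask, so the effective receptive field can deform to follow the local blur.

```python
from math import floor

def bilinear(img, y, x):
    """Bilinearly sample img (list of rows) at fractional (y, x); zero outside."""
    h, w = len(img), len(img[0])
    y0, x0 = floor(y), floor(x)
    out = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                # weight decays linearly with distance to the sample point
                out += img[yy][xx] * (1 - abs(y - yy)) * (1 - abs(x - xx))
    return out

def modulated_deform_conv_at(img, py, px, weights, offsets, masks, k=3):
    """One output value at (py, px) for a k x k modulated deformable kernel.

    weights, offsets, masks are flat lists of length k*k, ordered row-major
    over the kernel grid (all names here are illustrative, not the paper's).
    """
    r = k // 2
    acc = 0.0
    for i, (ky, kx) in enumerate(
        (a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
    ):
        dy, dx = offsets[i]   # learned per-tap fractional offset
        m = masks[i]          # learned per-tap modulation scalar in [0, 1]
        acc += weights[i] * m * bilinear(img, py + ky + dy, px + kx + dx)
    return acc
```

With all offsets at zero and all masks at one, this reduces to an ordinary convolution; the network's offset branch (supervised here by bi-directional optical flows, per contribution 2) is what makes the sampling pattern spatially variant.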

Original language: English
Article number: 9157149
Pages (from-to): 3552-3561
Number of pages: 10
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
DOI
Publication status: Published - 2020
Event: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020 - Virtual, Online, United States
Duration: 14 Jun 2020 - 19 Jun 2020
