DTRANSGAN: DEBLURRING TRANSFORMER BASED ON GENERATIVE ADVERSARIAL NETWORK

Kai Zhuang, Yuan Yuan, Qi Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

Motion deblurring is challenging due to fast movements of the object or of the camera itself. Existing methods usually try to address it by training CNN models or Generative Adversarial Networks (GANs); however, these methods cannot restore fine details well. In this paper, a Deblurring Transformer based on Generative Adversarial Network (DTransGAN) is proposed to improve deblurring performance for vehicles in surveillance camera scenes. The proposed DTransGAN combines low-level and high-level information through skip connections, preserving as much of the original image information as possible so that details can be restored. In addition, we replace the convolution layers in the generator with Swin Transformer blocks, which attend more closely to the reconstruction of details. Finally, we create a vehicle motion blur dataset consisting of two parts: clear images and the corresponding blurry images. Experiments on public datasets and the collected dataset show that DTransGAN achieves state-of-the-art results on the motion deblurring task.
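The abstract describes a generator that fuses low-level and high-level features through skip connections and replaces convolution layers with Swin Transformer blocks. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that kind of design, where `WindowAttentionBlock` is a simplified, hypothetical stand-in for a Swin Transformer block and the channel widths, depth, and residual output are illustrative choices.

```python
# Illustrative sketch only (not the authors' DTransGAN code): a small encoder-decoder
# generator in which attention blocks replace convolutional blocks and an encoder
# feature map is passed to the decoder via a skip connection, so low-level detail
# is preserved for reconstruction. Assumes even spatial dimensions.
import torch
import torch.nn as nn

class WindowAttentionBlock(nn.Module):
    """Simplified stand-in for a Swin Transformer block (windowing omitted)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)         # -> (B, H*W, C) token sequence
        q = self.norm1(t)
        t = t + self.attn(q, q, q)[0]            # self-attention with residual
        t = t + self.mlp(self.norm2(t))          # feed-forward with residual
        return t.transpose(1, 2).reshape(b, c, h, w)

class SketchGenerator(nn.Module):
    """Encoder-decoder generator with a skip connection fusing low/high-level features."""
    def __init__(self, dim=32):
        super().__init__()
        self.stem = nn.Conv2d(3, dim, 3, padding=1)
        self.enc1 = WindowAttentionBlock(dim)
        self.down = nn.Conv2d(dim, dim * 2, 3, stride=2, padding=1)
        self.enc2 = WindowAttentionBlock(dim * 2)
        self.up = nn.ConvTranspose2d(dim * 2, dim, 2, stride=2)
        self.dec1 = WindowAttentionBlock(dim)
        self.out = nn.Conv2d(dim, 3, 3, padding=1)

    def forward(self, blurry):
        s1 = self.enc1(self.stem(blurry))        # low-level features
        deep = self.enc2(self.down(s1))          # high-level features at half resolution
        fused = self.dec1(self.up(deep) + s1)    # skip connection fuses both levels
        return blurry + self.out(fused)          # residual output: predicted sharp image
```

As a usage example, `SketchGenerator()(torch.randn(1, 3, 64, 64))` returns a 1×3×64×64 restored image; in a full GAN setting such a generator would be trained against a discriminator on paired clear/blurry images like those in the collected vehicle dataset.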

Original language: English
Title of host publication: 2022 IEEE International Conference on Image Processing, ICIP 2022 - Proceedings
Publisher: IEEE Computer Society
Pages: 701-705
Number of pages: 5
ISBN (Electronic): 9781665496209
DOI
Publication status: Published - 2022
Event: 29th IEEE International Conference on Image Processing, ICIP 2022 - Bordeaux, France
Duration: 16 Oct 2022 → 19 Oct 2022

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
ISSN (Print): 1522-4880

Conference

Conference: 29th IEEE International Conference on Image Processing, ICIP 2022
Country/Territory: France
City: Bordeaux
Period: 16/10/22 → 19/10/22
