Efficient Image Enhancement with A Diffusion-Based Frequency Prior

Qingsen Yan, Tao Hu, Peng Wu, Duwei Dai, Shuhang Gu, Wei Dong, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Due to the lack of appropriate priors, generating the content of dark regions remains a challenge in low-light image enhancement. Recently, diffusion models have leveraged their strong image generation capabilities to enhance low-light images. However, diffusion models require multiple iterations at the image feature level to generate details and content, which limits inference speed. Moreover, diffusion-based methods tend to produce unexpected artifacts in degraded regions. To address these issues, we propose a Frequency Priors-guided Image Enhancement (FPIE) network, consisting of a frequency prior generation network and an image restoration network. FPIE significantly accelerates inference by learning a compact abstract prior under frequency-domain constraints. Concretely, to learn compact priors in the frequency domain, we introduce a joint training approach for the prior generation and restoration models that constrains the distribution of the priors. Furthermore, to better exploit frequency-domain features and strengthen the network's generative capability, a wavelet-based transformer block is introduced to produce intricate details and avoid artifacts in the output. Extensive experiments on commonly used benchmarks demonstrate that our approach achieves state-of-the-art performance and generalizes well to real-world images.
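To make the frequency-domain idea concrete, the sketch below (not the authors' code; all names and shapes are illustrative assumptions) shows a single-level 2D Haar wavelet split of a feature map into one low-frequency and three high-frequency sub-bands, the kind of decomposition a wavelet-based transformer block could operate on.

```python
# Minimal sketch: single-level 2D Haar wavelet decomposition of a feature map.
# This is an illustrative assumption, not the FPIE implementation.
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT. x: (H, W) array with even H and W."""
    # Pair adjacent rows, then adjacent columns, taking averaged sums/differences.
    a, b = x[0::2, :], x[1::2, :]                 # even/odd row pairs
    lo_r, hi_r = (a + b) / 2.0, (a - b) / 2.0
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0    # low-low: coarse content
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0    # horizontal detail
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0    # vertical detail
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0    # diagonal detail
    return ll, (lh, hl, hh)

# Example: decompose a random 8x8 feature map into 4x4 sub-bands.
feat = np.random.rand(8, 8).astype(np.float32)
ll, highs = haar_dwt2(feat)
print(ll.shape, [h.shape for h in highs])         # (4, 4) and three (4, 4) bands
```

Separating coarse content (LL) from fine detail bands (LH, HL, HH) is what allows a frequency-aware block to restore global illumination and local texture with different emphasis, which is the intuition behind constraining the learned prior in the frequency domain.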

Keywords

  • diffusion model
  • frequency prior
  • image enhancement
  • low-light image
