
DT-RSRGAN: An one-off domain translation generative model for real image super-resolution

  • Northwestern Polytechnical University, Xi'an
  • Autonomous University of Barcelona

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

In single image super-resolution (SISR) tasks, there is inevitably a “domain gap” between synthetic and realistic datasets, which leads to a corresponding performance drop. Domain translation (DT) based approaches have emerged to narrow this discrepancy by converting data between source and target domains while maintaining semantic consistency. Currently, one-off and two-stage models constitute the main DT-SISR methods. However, because they cannot incorporate the prior knowledge of pre-trained SR networks, one-off methods often perform worse than the structurally complex two-stage models. To achieve both simplicity and a performance gain, we propose a one-off DT-SISR model, DT-RSRGAN, for real-world SISR (RW-SISR). Our underlying principle is to recover LR observations by exploiting vision transformers (ViTs) built on self-attention (SA) mechanisms within generative adversarial models, aiming to fully exploit knowledge of internal image correlations in the absence of external prior information. We further devise an image complexity (IC) loss in DT-RSRGAN, which serves as a relaxed constraint when high-resolution (HR) training references are unavailable in the one-off setting, thus suppressing the artifacts that haunt GAN-based SR results. These measures collectively allow DT-RSRGAN to operate in a one-off manner while achieving competitive performance against state-of-the-art (SOTA) DT-SISR solutions. Extensive experiments on multiple benchmarks validate the effectiveness and superiority of DT-RSRGAN on RW-SISR problems.
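The abstract's core mechanism, self-attention over image tokens to capture internal image correlations, can be illustrated with a minimal sketch. This is a generic scaled dot-product self-attention in plain numpy, not the paper's implementation: all names and dimensions here are illustrative assumptions, and DT-RSRGAN's ViT blocks would additionally use multi-head attention, normalization, and feed-forward layers.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a token sequence.

    x: (n, d) token (e.g. image-patch) embeddings.
    w_q, w_k, w_v: (d, d_k) query/key/value projection matrices.
    Returns the attention-weighted values, shape (n, d_k).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])       # (n, n) pairwise affinities
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # softmax over keys
    return attn @ v                               # re-weight values by attention

# Illustrative usage: 16 patch embeddings of dimension 32, projected to 8.
rng = np.random.default_rng(0)
n, d, d_k = 16, 32, 8
x = rng.standard_normal((n, d))
w_q, w_k, w_v = (rng.standard_normal((d, d_k)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (16, 8)
```

Because every output token is a weighted mixture of all input tokens, such a layer lets a generator relate distant image regions, which is the internal-correlation knowledge the abstract appeals to in place of external priors.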

Original language: English
Article number: 111944
Journal: Pattern Recognition
Volume: 169
DOI
Publication status: Published - Jan 2026
