Attention-guided video super-resolution with recurrent multi-scale spatial–temporal transformer

Wei Sun, Xianguang Kong, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

Video super-resolution (VSR) aims to recover high-resolution (HR) content from low-resolution (LR) observations by compositing the spatial–temporal information in the LR frames, so propagating and aggregating this information effectively is crucial. Recently, although transformers have shown impressive performance on high-level vision tasks, few attempts have been made to apply them to image restoration, and fewer still to VSR. Moreover, previous transformers process spatial and temporal information simultaneously, which easily synthesizes confused textures, and their high computational cost limits their development. Towards this end, we construct a novel bidirectional recurrent VSR architecture. Our model disentangles the task of learning spatial–temporal information into two easier sub-tasks, each of which focuses on propagating and aggregating specific information with a multi-scale transformer-based design; this alleviates the difficulty of learning. Additionally, an attention-guided motion compensation module is applied to remove the influence of misalignment between frames. Experiments on three widely used benchmark datasets show that, relying on superior feature correlation learning, the proposed network outperforms previous state-of-the-art methods, especially in recovering fine details.
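The abstract outlines bidirectional recurrent propagation, disentangled spatial–temporal sub-tasks, and attention-guided motion compensation. Below is a minimal PyTorch sketch of that overall flow, not the paper's implementation: all module names are hypothetical, the multi-scale transformer blocks are stood in for by plain convolutions for brevity, and the attention-guided alignment is reduced to a learned gating of the propagated hidden state.

```python
# Hypothetical sketch of bidirectional recurrent VSR propagation.
# Module names are illustrative; the paper's multi-scale spatial-temporal
# transformer and motion compensation are only approximated here.
import torch
import torch.nn as nn


class AttentionGuidedAlign(nn.Module):
    """Stand-in for attention-guided motion compensation: gates the
    propagated hidden state by its agreement with the current frame."""

    def __init__(self, channels):
        super().__init__()
        self.to_attn = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feat, hidden):
        attn = torch.sigmoid(self.to_attn(torch.cat([feat, hidden], dim=1)))
        return hidden * attn  # suppress misaligned content


class RecurrentBranch(nn.Module):
    """One propagation direction (forward or backward in time)."""

    def __init__(self, channels):
        super().__init__()
        self.align = AttentionGuidedAlign(channels)
        # Plain convolution where the paper uses a multi-scale transformer.
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, feats):  # feats: list of (B, C, H, W) in propagation order
        hidden = torch.zeros_like(feats[0])
        outputs = []
        for feat in feats:
            hidden = self.align(feat, hidden)
            hidden = self.fuse(torch.cat([feat, hidden], dim=1))
            outputs.append(hidden)
        return outputs


class BidirectionalVSR(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.extract = nn.Conv2d(3, channels, 3, padding=1)
        self.forward_branch = RecurrentBranch(channels)
        self.backward_branch = RecurrentBranch(channels)
        self.upsample = nn.Sequential(
            nn.Conv2d(2 * channels, 4 * 3, 3, padding=1),
            nn.PixelShuffle(2),  # x2 upscaling for brevity
        )

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        feats = [self.extract(frames[:, t]) for t in range(frames.size(1))]
        fwd = self.forward_branch(feats)
        bwd = self.backward_branch(feats[::-1])[::-1]
        sr = [self.upsample(torch.cat([f, b], dim=1)) for f, b in zip(fwd, bwd)]
        return torch.stack(sr, dim=1)  # (B, T, 3, 2H, 2W)


if __name__ == "__main__":
    lr = torch.randn(1, 5, 3, 32, 32)
    print(BidirectionalVSR()(lr).shape)  # torch.Size([1, 5, 3, 64, 64])
```

The gating here is a deliberately simple proxy for the paper's attention-guided compensation: it lets poorly aligned hidden-state content be down-weighted before fusion, which is the role the abstract attributes to that module.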

Original language: English
Pages (from-to): 3989-4002
Number of pages: 14
Journal: Complex and Intelligent Systems
Volume: 9
Issue number: 4
DOIs
State: Published - Aug 2023

Keywords

  • Attention mechanism
  • Motion compensation
  • Spatial–temporal transformer
  • Video super-resolution
