A novel framework for image-to-image translation and image compression

Fei Yang, Yaxing Wang, Luis Herranz, Yongmei Cheng, Mikhail G. Mozerov

Research output: Contribution to journal › Article › peer-review


Abstract

Data-driven paradigms using machine learning are becoming ubiquitous in image processing and communications. In particular, image-to-image (I2I) translation is a generic and widely used approach to image processing problems, such as image synthesis, style transfer, and image restoration. At the same time, neural image compression has emerged as a data-driven alternative to traditional coding approaches in visual communications. In this paper, we study the combination of these two paradigms into a joint I2I compression and translation framework, focusing on multi-domain image synthesis. We first propose distributed I2I translation by integrating quantization and entropy coding into an I2I translation framework (i.e., I2Icodec). In practice, the image compression functionality (i.e., autoencoding) is also desirable, which would require deploying a regular image codec alongside I2Icodec. Thus, we further propose a unified framework that provides both translation and autoencoding capabilities in a single codec. Adaptive residual blocks conditioned on the translation/compression mode provide flexible adaptation to the desired functionality. The experiments show promising results in both I2I translation and image compression using a single model.
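The mode-conditioned adaptive residual blocks mentioned in the abstract could be sketched roughly as follows. This is an illustrative NumPy sketch under stated assumptions, not the authors' implementation: the class name, the FiLM-style per-mode scale/shift modulation, and the 1x1-convolution placeholder for real convolutional layers are all assumptions made for demonstration.

```python
import numpy as np

class AdaptiveResidualBlock:
    """Residual block whose features are modulated by a discrete mode flag
    (e.g. 0 = compression/autoencoding, 1 = translation), so one set of
    shared weights can serve both functionalities."""

    def __init__(self, channels, num_modes=2, seed=0):
        rng = np.random.default_rng(seed)
        # Shared 1x1 "convolution" weights (placeholder for real conv layers).
        self.w = rng.standard_normal((channels, channels)) * 0.1
        # Per-mode affine parameters: scale (gamma) and shift (beta).
        # In a trained model these would be learned; here they are initialized
        # to identity (gamma=1) and zero shift as placeholders.
        self.gamma = np.ones((num_modes, channels))
        self.beta = np.zeros((num_modes, channels))

    def __call__(self, x, mode):
        # x: feature map of shape (channels, H, W); mode selects the affine set.
        h = np.einsum('oc,chw->ohw', self.w, x)  # 1x1 conv as a channel mix
        h = np.maximum(h, 0.0)                   # ReLU nonlinearity
        g = self.gamma[mode][:, None, None]      # mode-conditioned scale
        b = self.beta[mode][:, None, None]       # mode-conditioned shift
        return x + g * h + b                     # residual connection

block = AdaptiveResidualBlock(channels=4)
x = np.ones((4, 2, 2))
y_compress = block(x, mode=0)
y_translate = block(x, mode=1)
print(y_compress.shape)  # (4, 2, 2): same spatial shape, mode-dependent features
```

The design point this illustrates is that switching behavior between compression and translation needs only a small set of per-mode parameters, while the bulk of the network weights remains shared across both tasks.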

Original language: English
Pages (from-to): 58-70
Number of pages: 13
Journal: Neurocomputing
Volume: 508
DOIs
State: Published - 7 Oct 2022

Keywords

  • Autoencoder
  • Communication
  • Image compression
  • Image-to-image translation
