Reconstruction of Hyperspectral Data from RGB Images with Prior Category Information

  • Longbin Yan
  • Xiuheng Wang
  • Min Zhao
  • Maboud Kaloorazi
  • Jie Chen
  • Susanto Rahardja

Research output: Contribution to journal › Article › peer-review

55 Scopus citations

Abstract

Hyperspectral recovery from RGB images has recently attracted considerable attention in many imaging and computer vision applications because it offers a low-cost means of acquiring spectral signatures of natural scenes. However, current methods for recovering hyperspectral information from RGB measurements may fail for objects that share similar RGB features. In this paper, we introduce a novel framework with a U-net-based architecture, named C2H-Net, which reconstructs high-quality hyperspectral images from their RGB measurements. C2H-Net also exploits prior information, comprising the category and coordinate information of specific objects of interest, to address this limitation of existing methods. C2H-Net is highly accurate and outputs 'true' spectral information of objects/scenes. In addition, a new hyperspectral dataset named C2H-Data (available on GitHub) is developed in this work and used for additional extensive evaluation of the proposed framework. C2H-Data contains a variety of objects across a large number of images, together with category information, which should be useful for the research community. We conduct experiments on three different datasets to show the effectiveness of C2H-Net. The experimental results show that our proposed method outperforms several existing methods.
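A common way to inject this kind of per-object prior into a U-net-style network is to encode the category labels as extra input channels aligned with the RGB image, so that each labelled pixel carries both its category and its spatial coordinates. The sketch below illustrates that idea only; the function name, the one-hot encoding scheme, and the assumption that C2H-Net concatenates priors at the input are illustrative, not taken from the paper.

```python
import numpy as np

def build_prior_input(rgb, category_mask, num_categories):
    """Concatenate an RGB image with one-hot category-prior channels.

    rgb           : (H, W, 3) float array, the measured RGB image.
    category_mask : (H, W) int array of per-pixel category IDs; because the
                    labels are spatially aligned with the image, they encode
                    both category and coordinate information.
    Returns an (H, W, 3 + num_categories) array usable as network input.
    """
    h, w, _ = rgb.shape
    onehot = np.zeros((h, w, num_categories), dtype=rgb.dtype)
    # Scatter a 1 into the channel matching each pixel's category ID.
    rows, cols = np.indices((h, w))
    onehot[rows, cols, category_mask] = 1.0
    return np.concatenate([rgb, onehot], axis=-1)
```

For example, a 4x4 RGB patch with two categories yields a 4x4x5 input tensor, and pixels with the same RGB values but different category labels become distinguishable to the network.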

Original language: English
Article number: 9109715
Pages (from-to): 1070-1081
Number of pages: 12
Journal: IEEE Transactions on Computational Imaging
Volume: 6
DOIs
State: Published - 2020

Keywords

  • Deep convolution neural network (CNN)
  • hyperspectral imaging
  • image synthesis
  • multichannel image reconstruction
