Convolutional neural networks based hyperspectral image classification method with adaptive kernels

Research output: Contribution to journal › Article › peer-review

54 Scopus citations

Abstract

Hyperspectral image (HSI) classification aims to assign each pixel a pre-defined class label, which underpins many vision-related applications such as remote sensing, mineral exploration, and ground-object identification. Many classification methods have therefore been proposed for better hyperspectral imagery interpretation. Witnessing the success of convolutional neural networks (CNNs) in classification tasks on conventional images, considerable effort has been made to leverage CNNs for HSI classification. One advanced CNN architecture uses kernels generated by a clustering method; for example, a K-means network uses K-means to generate its kernels. However, such kernels are often obtained heuristically (e.g., the number of kernels must be set manually), and how to determine the number of convolutional kernels (i.e., filters) adaptively from the data, and thus generate kernels that better represent the data, has seldom been studied in existing CNN-based HSI classification methods. In this study, we propose a new CNN-based HSI classification method in which the convolutional kernels are learned automatically from the data through clustering, without the cluster number being known in advance. With these data-adaptive kernels, the proposed method achieves better classification results. Experimental results demonstrate the effectiveness of the proposed method.
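The kernel-from-clustering idea the abstract contrasts against can be sketched as follows: sample small patches from an image band, cluster them with K-means, and use the resulting centroids as convolutional kernels. This is an illustrative stand-in only, not the paper's method; note that the fixed `n_kernels` argument below is exactly the manual choice the paper's adaptive approach aims to eliminate, and all names (`extract_patches`, `kmeans_kernels`) are hypothetical.

```python
import numpy as np

def extract_patches(band, k=5, n=400, seed=0):
    """Sample n random k x k patches from a 2-D band of shape (H, W)."""
    rng = np.random.default_rng(seed)
    H, W = band.shape
    ys = rng.integers(0, H - k + 1, n)
    xs = rng.integers(0, W - k + 1, n)
    patches = np.stack([band[y:y + k, x:x + k].ravel() for y, x in zip(ys, xs)])
    # Zero-mean each patch so the centroids behave like filters.
    return patches - patches.mean(axis=1, keepdims=True)

def kmeans_kernels(patches, n_kernels, iters=20, seed=0):
    """Plain K-means; the learned centroids serve as convolutional kernels."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), n_kernels, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest centroid.
        dists = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Recompute each non-empty cluster's centroid.
        for j in range(n_kernels):
            if (labels == j).any():
                centers[j] = patches[labels == j].mean(0)
    return centers

# Toy single-band image; real HSI data would have many spectral bands.
band = np.random.default_rng(1).random((64, 64))
patches = extract_patches(band, k=5, n=400)
kernels = kmeans_kernels(patches, n_kernels=8).reshape(8, 5, 5)
print(kernels.shape)  # (8, 5, 5)
```

The centroids can then be plugged into a CNN's first convolutional layer as fixed or initial filters; the paper's contribution is determining the cluster count (and hence the filter bank size) from the data rather than hard-coding it as above.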

Original language: English
Article number: 618
Journal: Remote Sensing
Volume: 9
Issue number: 6
State: Published - 1 Jun 2017

Keywords

  • Adaptive convolutional kernels
  • Automatic cluster number determination
  • Hyperspectral image classification

