Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging

  • Jaime Zabalza
  • , Jinchang Ren
  • , Jiangbin Zheng
  • , Huimin Zhao
  • , Chunmei Qing
  • , Zhijing Yang
  • , Peijun Du
  • , Stephen Marshall
  • University of Strathclyde
  • Guangdong Technical Normal University
  • South China University of Technology
  • Guangdong University of Technology
  • Nanjing University

Research output: Contribution to journal › Article › peer-review

364 Scopus citations

Abstract

Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst maintaining the key information of the data. Because hidden nodes in a conventional SAE must deal simultaneously with hundreds of spectral features from the hypercube as inputs, the complexity of the process increases, leading to limited abstraction and performance. As such, the segmented SAE (S-SAE) is proposed, which partitions the original features into smaller data segments that are separately processed by different, smaller SAEs. This results in reduced complexity yet improved efficacy of data abstraction and improved accuracy of data classification.
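The segmentation idea described above can be sketched in a few lines of numpy: split the spectral bands into segments, train a small autoencoder on each segment, and concatenate the per-segment hidden activations as the reduced feature vector. This is a minimal illustration only, not the authors' implementation; it uses a single hidden layer per segment (the paper stacks several), sigmoid activations, plain gradient descent, and hypothetical function names (`train_sae_segment`, `s_sae_features`).

```python
import numpy as np

def train_sae_segment(X, hidden, epochs=200, lr=0.1, seed=0):
    """Train one small single-hidden-layer autoencoder on one spectral
    segment; return its encoder. Illustrative sketch, not the paper's code."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)            # encode segment
        Y = sig(H @ W2 + b2)            # reconstruct segment
        dZ2 = (Y - X) * Y * (1 - Y)     # squared-error grad through sigmoid
        dZ1 = (dZ2 @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ dZ2 / n; b2 -= lr * dZ2.mean(0)
        W1 -= lr * X.T @ dZ1 / n; b1 -= lr * dZ1.mean(0)
    return lambda Z: sig(Z @ W1 + b1)   # encoder for this segment

def s_sae_features(X, n_segments, hidden):
    """Segmented SAE sketch: partition the spectral axis, train one small
    autoencoder per segment, concatenate the hidden-layer features."""
    feats = []
    for seg in np.array_split(np.arange(X.shape[1]), n_segments):
        enc = train_sae_segment(X[:, seg], hidden)
        feats.append(enc(X[:, seg]))
    return np.hstack(feats)

# Example: 50 pixels with 60 spectral bands, reduced via 3 segments of
# 20 bands each, 4 hidden nodes per segment -> 12 features per pixel.
X = np.random.default_rng(1).random((50, 60))
F = s_sae_features(X, n_segments=3, hidden=4)
```

Each per-segment autoencoder sees only ~d/k input bands instead of all d, which is the source of the reduced complexity the abstract refers to; the concatenated hidden activations replace the original spectrum as input to a classifier.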

Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: Neurocomputing
Volume: 185
State: Published - 12 Apr 2016

Keywords

  • Data reduction
  • Deep learning (DL)
  • Hyperspectral remote sensing
  • Segmented stacked autoencoder (S-SAE)
