Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging

Jaime Zabalza, Jinchang Ren, Jiangbin Zheng, Huimin Zhao, Chunmei Qing, Zhijing Yang, Peijun Du, Stephen Marshall

Research output: Contribution to journal › Article › peer-review

350 Citations (Scopus)

Abstract

Stacked autoencoders (SAEs), as part of the deep learning (DL) framework, have recently been proposed for feature extraction in hyperspectral remote sensing. With the help of hidden nodes in deep layers, a high-level abstraction is achieved for data reduction whilst the key information of the data is maintained. As hidden nodes in SAEs have to deal simultaneously with hundreds of features from hypercubes as inputs, the complexity of the process increases, which limits both abstraction and performance. As such, a segmented SAE (S-SAE) is proposed, in which the original spectral features are partitioned into smaller data segments that are separately processed by different, smaller SAEs. This reduces complexity while improving the efficacy of data abstraction and the accuracy of data classification.
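The segmentation idea described above can be illustrated with a minimal sketch: the spectral bands are split into contiguous segments, a small one-hidden-layer autoencoder is trained per segment, and the hidden activations are concatenated as the reduced feature vector. This is only an assumed, simplified rendering of the S-SAE concept (the paper stacks several autoencoder layers per segment); the class and function names, the tied-weight linear decoder, and all hyperparameters here are illustrative choices, not the authors' implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SmallAE:
    """One-hidden-layer autoencoder with tied weights, trained by
    plain batch gradient descent on the reconstruction error."""
    def __init__(self, n_in, n_hidden, lr=0.1, epochs=200, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # encoder weights
        self.b = np.zeros(n_hidden)                      # encoder bias
        self.c = np.zeros(n_in)                          # decoder bias
        self.lr, self.epochs = lr, epochs

    def fit(self, X):
        n = len(X)
        for _ in range(self.epochs):
            H = sigmoid(X @ self.W + self.b)   # encode
            R = H @ self.W.T + self.c          # decode (tied weights)
            err = R - X                        # reconstruction error
            dH = (err @ self.W) * H * (1.0 - H)
            gW = X.T @ dH + err.T @ H          # both paths through tied W
            self.W -= self.lr * gW / n
            self.b -= self.lr * dH.mean(axis=0)
            self.c -= self.lr * err.mean(axis=0)

    def encode(self, X):
        return sigmoid(X @ self.W + self.b)

def s_sae_features(X, n_segments=3, n_hidden=5):
    """Split the spectral bands of X (samples x bands) into contiguous
    segments, train one small autoencoder per segment, and concatenate
    the per-segment hidden activations as the extracted features."""
    band_groups = np.array_split(np.arange(X.shape[1]), n_segments)
    codes = []
    for idx in band_groups:
        ae = SmallAE(len(idx), n_hidden)
        ae.fit(X[:, idx])
        codes.append(ae.encode(X[:, idx]))
    return np.hstack(codes)
```

With, say, 60 input bands, 3 segments, and 5 hidden nodes per segment, each small autoencoder sees only 20 inputs instead of 60, and the concatenated output has 15 features per sample; this is the complexity reduction the abstract refers to.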

Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: Neurocomputing
Volume: 185
DOI
Publication status: Published - 12 Apr 2016
