TY - JOUR
T1 - Deep Supervised and Contractive Neural Network for SAR Image Classification
AU - Geng, Jie
AU - Wang, Hongyu
AU - Fan, Jianchao
AU - Ma, Xiaorui
N1 - Publisher Copyright:
© 1980-2012 IEEE.
PY - 2017/4
Y1 - 2017/4
N2 - The classification of a synthetic aperture radar (SAR) image is a significant yet challenging task, due to the presence of speckle noise and the absence of effective feature representation. Inspired by deep learning technology, a novel deep supervised and contractive neural network (DSCNN) for SAR image classification is proposed to overcome these problems. In order to extract spatial features, a multiscale patch-based feature extraction model that consists of gray level-gradient co-occurrence matrix, Gabor, and histogram of oriented gradient descriptors is developed to obtain primitive features from the SAR image. Then, to obtain a discriminative representation of the initial features, the DSCNN, which comprises four layers of supervised and contractive autoencoders, is proposed to optimize features for classification. The supervised penalty of the DSCNN can capture the relevant information between features and labels, and the contractive restriction aims to enhance the local invariance and robustness of the encoded representation. Consequently, the DSCNN is able to produce an effective representation of sample features and provide accurate predictions of the class labels. Moreover, to suppress the influence of speckle noise, a graph-cut-based spatial regularization is adopted after classification to correct misclassified pixels and smooth the results. Experiments on three SAR data sets demonstrate that the proposed method yields superior classification performance compared with related approaches.
AB - The classification of a synthetic aperture radar (SAR) image is a significant yet challenging task, due to the presence of speckle noise and the absence of effective feature representation. Inspired by deep learning technology, a novel deep supervised and contractive neural network (DSCNN) for SAR image classification is proposed to overcome these problems. In order to extract spatial features, a multiscale patch-based feature extraction model that consists of gray level-gradient co-occurrence matrix, Gabor, and histogram of oriented gradient descriptors is developed to obtain primitive features from the SAR image. Then, to obtain a discriminative representation of the initial features, the DSCNN, which comprises four layers of supervised and contractive autoencoders, is proposed to optimize features for classification. The supervised penalty of the DSCNN can capture the relevant information between features and labels, and the contractive restriction aims to enhance the local invariance and robustness of the encoded representation. Consequently, the DSCNN is able to produce an effective representation of sample features and provide accurate predictions of the class labels. Moreover, to suppress the influence of speckle noise, a graph-cut-based spatial regularization is adopted after classification to correct misclassified pixels and smooth the results. Experiments on three SAR data sets demonstrate that the proposed method yields superior classification performance compared with related approaches.
KW - Contractive autoencoder (AE)
KW - deep neural network (DNN)
KW - supervised classification
KW - synthetic aperture radar (SAR) image
UR - http://www.scopus.com/inward/record.url?scp=85010190008&partnerID=8YFLogxK
U2 - 10.1109/TGRS.2016.2645226
DO - 10.1109/TGRS.2016.2645226
M3 - Article
AN - SCOPUS:85010190008
SN - 0196-2892
VL - 55
SP - 2442
EP - 2459
JO - IEEE Transactions on Geoscience and Remote Sensing
JF - IEEE Transactions on Geoscience and Remote Sensing
IS - 4
M1 - 7827114
ER -