Rotation-invariant feature learning for object detection in VHR optical remote sensing images by double-net

Zhi Zhang, Ruoqiao Jiang, Shaohui Mei, Shun Zhang, Yifan Zhang

Research output: Contribution to journal › Article › peer-review


Abstract

Rotation-invariant feature extraction is crucial for object detection in very high resolution (VHR) optical remote sensing images. Although convolutional neural networks (CNNs) excel at extracting translation-invariant features and have been widely applied in computer vision, extracting rotation-invariant features from VHR optical remote sensing images remains challenging for CNNs. In this paper, we present a novel Double-Net that takes sample pairs from the same class as inputs to improve object detection and classification in VHR optical remote sensing images. Specifically, the proposed Double-Net contains multiple CNN channels, each corresponding to a specific rotation direction, with all channels sharing identical weights. A multiple instance learning algorithm is then applied to the output features of all channels to extract the final rotation-invariant features. Experimental results on two publicly available benchmark datasets, Mnist-rot-12K and NWPU VHR-10, demonstrate that the presented Double-Net outperforms existing approaches in rotation-invariant feature extraction and is especially effective when training samples are scarce.
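The abstract's core idea (multiple channels, one per rotation direction, sharing identical weights, with the per-channel features aggregated into a rotation-invariant descriptor) can be sketched in NumPy. This is a minimal illustration inferred from the abstract, not the authors' implementation: the toy convolution stands in for a full CNN, the kernel weights are random placeholders, and a max over channels stands in for the paper's multiple instance learning aggregation. All function names here (`channel_features`, `double_net_features`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, k):
    # Naive valid-mode 2D cross-correlation (toy stand-in for a CNN layer).
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# Placeholder shared weights: every rotation channel uses these same kernels.
KERNELS = rng.standard_normal((4, 3, 3))

def channel_features(x):
    # One channel: shared conv weights + ReLU + global average pooling.
    return np.array([np.maximum(conv2d_valid(x, k), 0).mean() for k in KERNELS])

def double_net_features(x, n_rot=4):
    # Run each 90-degree rotation of the input through the shared-weight
    # channel, then max-pool across channels (stand-in for MIL aggregation).
    feats = np.stack([channel_features(np.rot90(x, k)) for k in range(n_rot)])
    return feats.max(axis=0)

x = rng.standard_normal((8, 8))
# The pooled descriptor is invariant to 90-degree rotations of the input,
# because rotating x only permutes the set of channel inputs.
assert np.allclose(double_net_features(x), double_net_features(np.rot90(x)))
```

The invariance in the final assertion holds exactly only for the discrete rotations enumerated by the channels; finer rotation angles would require more channels (and interpolation when rotating the input).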

Original language: English
Article number: 8936929
Pages (from-to): 20818-20827
Number of pages: 10
Journal: IEEE Access
Volume: 8
DOIs
State: Published - 2020

Keywords

  • Convolutional neural network (CNN)
  • feature learning
  • object detection
  • rotation-invariant
  • very high resolution (VHR)

