Truncation Cross Entropy Loss for Remote Sensing Image Captioning

Xuelong Li, Xueting Zhang, Wei Huang, Qi Wang

Research output: Contribution to journal › Article › peer-review

83 Scopus citations

Abstract

Recently, remote sensing image captioning (RSIC) has drawn increasing attention. In this field, encoder-decoder-based methods have become mainstream due to their excellent performance. In the encoder-decoder framework, a convolutional neural network (CNN) encodes a remote sensing image into a semantic feature vector, and a sequence model such as long short-term memory (LSTM) is subsequently adopted to generate a content-related caption based on the feature vector. During the traditional training stage, the probability of the target word at each time step is forcibly optimized toward 1 by the cross entropy (CE) loss. However, because of the variability and ambiguity of possible image captions, the target word could be replaced by other words such as its synonyms; such an optimization strategy therefore tends to cause the network to overfit. In this article, we explore the overfitting phenomenon in RSIC caused by the CE loss and correspondingly propose a new truncation cross entropy (TCE) loss, aiming to alleviate the overfitting problem. To verify the effectiveness of the proposed approach, extensive comparison experiments are performed on three public RSIC data sets: UCM-captions, Sydney-captions, and RSICD. The state-of-the-art results on Sydney-captions and RSICD and the competitive results on UCM-captions achieved with the TCE loss demonstrate that the proposed method is beneficial to RSIC.
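The abstract does not spell out the exact form of the TCE loss, so the following PyTorch-style sketch only illustrates the general idea described above, under the assumption that the per-token loss is clamped to a constant once the predicted probability of the target word exceeds a hand-chosen threshold. The function name `truncated_cross_entropy` and the `threshold` value are illustrative choices, not taken from the paper.

```python
import math
import torch
import torch.nn.functional as F

def truncated_cross_entropy(logits, targets, threshold=0.9):
    # logits:  (N, vocab_size) unnormalized decoder scores for N tokens
    # targets: (N,) indices of the ground-truth words
    log_probs = F.log_softmax(logits, dim=-1)
    target_log_probs = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    target_probs = target_log_probs.exp()

    ce = -target_log_probs                                   # standard per-token CE
    truncated = torch.full_like(ce, -math.log(threshold))    # constant, carries no gradient

    # Once p(target) already exceeds the threshold, stop pushing it toward 1
    # by replacing the CE term with a constant for that token.
    loss = torch.where(target_probs > threshold, truncated, ce)
    return loss.mean()
```

In this sketch, tokens whose target probability is already above the threshold contribute a constant to the loss and thus receive no gradient, which is one plausible way to avoid forcing every target probability to 1 as the abstract describes.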

Original language: English
Article number: 9153154
Pages (from-to): 5246-5257
Number of pages: 12
Journal: IEEE Transactions on Geoscience and Remote Sensing
Volume: 59
Issue number: 6
DOIs
State: Published - Jun 2021

Keywords

  • Image captioning
  • overfitting
  • remote sensing
  • truncation cross entropy (TCE) loss
