A CNN–RNN architecture for multi-label weather recognition

Bin Zhao, Xuelong Li, Xiaoqiang Lu, Zhigang Wang

Research output: Contribution to journal › Article › peer-review

101 Scopus citations

Abstract

Weather recognition plays an important role in our daily lives and in many computer vision applications. However, recognizing the weather conditions from a single image remains challenging and has not been studied thoroughly. Generally, most previous works treat weather recognition as a single-label classification task, namely, determining whether an image belongs to a specific weather class or not. This treatment is not always appropriate, since more than one weather condition may appear simultaneously in a single image. To address this problem, we make the first attempt to view weather recognition as a multi-label classification task, i.e., assigning an image more than one label according to the displayed weather conditions. Specifically, a CNN–RNN based multi-label classification approach is proposed in this paper. The convolutional neural network (CNN) is extended with a channel-wise attention model to extract the most correlated visual features. The recurrent neural network (RNN) further processes the features and excavates the dependencies among weather classes. Finally, the weather labels are predicted step by step. In addition, we construct two datasets for the weather recognition task and explore the relationships among different weather conditions. Experimental results demonstrate the superiority and effectiveness of the proposed approach. The newly constructed datasets will be available at https://github.com/wzgwzg/Multi-Label-Weather-Recognition.
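To make the described pipeline concrete, the sketch below shows one plausible reading of the abstract: a CNN backbone whose feature channels are re-weighted by a channel-wise attention module, followed by an RNN that predicts weather labels step by step. It is an illustrative sketch only, not the authors' released code; the ResNet-18 backbone, the squeeze-and-excitation style attention, the plain LSTM (instead of the convolutional LSTM named in the keywords), and all layer sizes are assumptions made for brevity.

```python
# Minimal sketch of a CNN-RNN multi-label weather classifier (assumed design).
import torch
import torch.nn as nn
import torchvision.models as models


class ChannelAttention(nn.Module):
    """Channel-wise attention in a squeeze-and-excitation style (assumed form)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, feat):                        # feat: (B, C, H, W)
        weights = self.fc(feat.mean(dim=(2, 3)))    # (B, C) channel weights
        return feat * weights[:, :, None, None]     # re-weight each channel


class CNNRNNWeather(nn.Module):
    def __init__(self, num_classes, hidden=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # conv feature maps
        self.attn = ChannelAttention(512)
        self.rnn = nn.LSTM(input_size=512, hidden_size=hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, images, steps):
        feat = self.attn(self.cnn(images))            # attended feature map
        vec = feat.mean(dim=(2, 3))                   # global-pooled vector (B, 512)
        seq = vec.unsqueeze(1).repeat(1, steps, 1)    # same features at every step
        out, _ = self.rnn(seq)                        # RNN captures label dependencies
        return torch.sigmoid(self.classifier(out))    # per-step label probabilities


# Usage: score 7 hypothetical weather classes over 4 prediction steps.
model = CNNRNNWeather(num_classes=7)
scores = model(torch.randn(2, 3, 224, 224), steps=4)
print(scores.shape)  # torch.Size([2, 4, 7])
```

In this reading, a multi-label prediction is obtained by thresholding the per-step scores; the step-by-step decoding is what lets the model exploit co-occurrence between weather classes rather than scoring each label independently.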

Original language: English
Pages (from-to): 47-57
Number of pages: 11
Journal: Neurocomputing
Volume: 322
DOIs
State: Published - 17 Dec 2018

Keywords

  • Convolutional LSTM
  • Multi-label classification
  • Weather recognition

