VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection

Yuan Yuan, Zhitong Xiong, Qi Wang

Research output: Contribution to journal › Article › peer-review

167 Scopus citations

Abstract

Although traffic sign detection has been studied for years and great progress has been made with the rise of deep learning techniques, many problems remain to be addressed. Complicated real-world traffic scenes pose two main challenges. First, traffic signs are usually small objects, which makes them harder to detect than large ones; second, without context information it is difficult to distinguish false targets that resemble real traffic signs in complex street scenes. To handle these problems, we propose a novel end-to-end deep learning method for traffic sign detection in complex environments. Our contributions are twofold: 1) we propose a multi-resolution feature fusion network architecture that exploits densely connected deconvolution layers with skip connections and can learn more effective features for small objects; and 2) we frame traffic sign detection as a spatial sequence classification and regression task and propose a vertical spatial sequence attention module that gains more context information for better detection performance. To comprehensively evaluate the proposed method, we conduct experiments on several traffic sign datasets as well as a general object detection dataset; the results demonstrate the effectiveness of the proposed method.
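The core idea behind the vertical spatial sequence attention module can be illustrated with a minimal NumPy sketch: each column of a convolutional feature map is treated as a vertical sequence of feature vectors, and attention weights over the height axis produce a per-column context vector. The shapes, the function name, and the simple dot-product scoring vector `w` below are illustrative assumptions, not the paper's actual module, which is learned end-to-end inside the detection network.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vertical_sequence_attention(feat, w):
    """Attend over the vertical (height) axis of a feature map.

    feat: (H, W, C) feature map; each of the W columns is treated as a
          vertical sequence of H feature vectors (illustrative shapes).
    w:    (C,) scoring vector standing in for the learned attention.
    Returns attention weights (H, W) and a (W, C) context vector per column.
    """
    scores = feat @ w                               # (H, W) relevance per row
    alpha = softmax(scores, axis=0)                 # normalize over the vertical axis
    context = np.einsum('hw,hwc->wc', alpha, feat)  # weighted sum over the rows
    return alpha, context

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 16))              # H=8, W=4, C=16
w = rng.standard_normal(16)
alpha, ctx = vertical_sequence_attention(feat, w)
print(alpha.shape, ctx.shape)                       # (8, 4) (4, 16)
```

Attending along the vertical axis matches the scene prior the abstract appeals to: traffic signs appear at characteristic heights in street images, so aggregating context within each vertical strip helps reject sign-like false targets.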

Original language: English
Article number: 8632977
Pages (from-to): 3423-3434
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 28
Issue number: 7
DOIs
State: Published - Jul 2019

Keywords

  • context modeling
  • sequence attention model
  • small object
  • traffic sign detection
