SODA: Weakly Supervised Temporal Action Localization Based on Astute Background Response and Self-Distillation Learning

Tao Zhao, Junwei Han, Le Yang, Binglu Wang, Dingwen Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Weakly supervised temporal action localization is a practical yet challenging task. Although great efforts have been made in recent years, existing methods still have limited capacity to deal with the challenges of over-localization, joint-localization, and under-localization. Based on our investigation, the first two challenges arise from an insufficient ability to suppress background responses, while the third stems from a failure to discover complete action frames. To better address these challenges, we first propose the astute background response strategy. By enforcing the classification target of the background category to be zero, this strategy creates a conductive effect between video-level and frame-level classification, guiding the action categories to astutely suppress their responses at background frames and thereby alleviating the over-localization and joint-localization challenges. To alleviate the under-localization challenge, we introduce the self-distillation learning strategy, which simultaneously trains one master network and multiple auxiliary networks, where the auxiliary networks help the master network discover complete action frames. Experimental results on three benchmarks demonstrate the favorable performance of the proposed method against previous counterparts and its efficacy in tackling the three challenges.
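
As a rough illustration of the astute background response idea described above, the sketch below is a minimal PyTorch rendering, not the paper's actual formulation: the appended background class, the top-k temporal pooling, and the multi-label BCE loss are all assumptions made for illustration. The key point it shows is that the video-level target for the background category is fixed to zero, so strong background responses are penalised and, through the shared class activation sequence, frame-level background responses are suppressed as well.

```python
import torch
import torch.nn.functional as F

def video_level_loss(cas: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of a zero-target background loss.

    cas:    (B, T, C+1) class activation sequence; the last channel is
            assumed to be a background category (an illustrative choice,
            the paper's exact design may differ).
    labels: (B, C) multi-hot video-level action labels.
    """
    # Aggregate frame-level scores into video-level scores with top-k
    # temporal pooling (a common choice; the paper's pooling may differ).
    k = max(1, cas.shape[1] // 8)
    video_scores = cas.topk(k, dim=1).values.mean(dim=1)      # (B, C+1)

    # Build targets: action classes keep their labels, while the
    # background class target is forced to zero.
    bg_target = torch.zeros(labels.shape[0], 1, device=labels.device)
    targets = torch.cat([labels.float(), bg_target], dim=1)   # (B, C+1)

    # Multi-label classification loss on the video-level predictions.
    return F.binary_cross_entropy_with_logits(video_scores, targets)
```

The self-distillation strategy would additionally require a master network and several auxiliary networks trained jointly with some consistency term between their predictions; that component is beyond this simplified sketch.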

Original language: English
Pages (from-to): 2474-2498
Number of pages: 25
Journal: International Journal of Computer Vision
Volume: 129
Issue number: 8
DOIs
State: Published - Aug 2021

Keywords

  • Background response
  • Self-distillation learning
  • Temporal action localization
