POLO: Learning Explicit Cross-Modality Fusion for Temporal Action Localization

Binglu Wang, Le Yang, Yongqiang Zhao

Research output: Contribution to journal › Article › peer-review

28 Scopus citations

Abstract

Temporal action localization aims to discover action instances in untrimmed videos, where RGB and optical flow are two widely used feature modalities: RGB chiefly reveals appearance, while flow mainly depicts motion. Given RGB and flow features, previous methods adopt either the early fusion or the late fusion paradigm to mine the complementarity between them. By concatenating raw RGB and flow features, early fusion lets the network exploit their complementarity implicitly, but it partly discards the particularity of each modality. Late fusion maintains two independent branches to explore the particularity of each modality, but it fuses only the localization results, which is insufficient to mine the complementarity. In this work, we propose explicit cross-modality fusion (POLO) to effectively utilize the complementarity between the two modalities while thoroughly exploring the particularity of each. POLO performs cross-modality fusion by estimating an attention weight from the RGB modality and applying it to the flow modality, and vice versa, so that the complementary cues of one modality supplement the other. Assisted by the attention weights, POLO learns independently from RGB and flow features and explores the particularity of each modality. Extensive experiments on two benchmarks demonstrate the favorable performance of POLO.
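The letter itself specifies the architecture; purely as an illustration of the frame-wise mutual-attention idea the abstract describes, the sketch below shows one plausible form in PyTorch. The module name, the single-conv sigmoid heads, the feature dimensions, and the residual combination are all assumptions made for this example, not the published POLO implementation.

```python
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Sketch of mutual cross-modality attention: each modality estimates a
    frame-wise attention weight that is applied to the OTHER modality's
    features, while each branch keeps learning from its own features."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # One small head per modality predicting a frame-wise weight in (0, 1).
        self.rgb_head = nn.Sequential(nn.Conv1d(feat_dim, 1, kernel_size=1), nn.Sigmoid())
        self.flow_head = nn.Sequential(nn.Conv1d(feat_dim, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, rgb: torch.Tensor, flow: torch.Tensor):
        # rgb, flow: (batch, feat_dim, time)
        w_rgb = self.rgb_head(rgb)     # attention estimated from RGB
        w_flow = self.flow_head(flow)  # attention estimated from flow
        # Cross application: RGB's weight modulates flow and vice versa;
        # a residual term preserves each modality's own particularity.
        flow_fused = flow * w_rgb + flow
        rgb_fused = rgb * w_flow + rgb
        return rgb_fused, flow_fused

# Hypothetical usage with I3D-style snippet features of length 100:
fusion = CrossModalityAttention(feat_dim=1024)
rgb = torch.randn(2, 1024, 100)
flow = torch.randn(2, 1024, 100)
rgb_out, flow_out = fusion(rgb, flow)
```

The cross application is the key point: because the weight applied to flow is computed from RGB (and vice versa), appearance cues can emphasize or suppress motion features at each frame, which is the explicit complementarity the abstract contrasts with implicit early fusion and result-level late fusion.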

Original language: English
Article number: 9362259
Pages (from-to): 503-507
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 28
DOIs
State: Published - 2021

Keywords

  • Feature fusion
  • frame-wise attention
  • mutual attention
  • temporal action localization
