POLO: Learning Explicit Cross-Modality Fusion for Temporal Action Localization

Binglu Wang, Le Yang, Yongqiang Zhao

Research output: Contribution to journal › Article › peer-review

28 Citations (Scopus)

Abstract

Temporal action localization aims to discover action instances in untrimmed videos, where RGB and optical flow are two widely used feature modalities: RGB chiefly reveals appearance, while flow mainly depicts motion. Given RGB and flow features, previous methods follow either the early fusion or the late fusion paradigm to mine the complementarity between them. By concatenating raw RGB and flow features, early fusion lets the network implicitly exploit their complementarity, but it partly discards the particularity of each modality. Late fusion maintains two independent branches to preserve the particularity of each modality, but it fuses only the localization results, which is insufficient to mine the complementarity. In this work, we propose explicit cross-modality fusion (POLO) to effectively utilize the complementarity between the two modalities while thoroughly exploring the particularity of each. POLO performs cross-modality fusion by estimating an attention weight from the RGB modality and applying it to the flow modality (and vice versa), so that the complementary information of one modality supplies the other. Assisted by the attention weights, POLO learns independently from RGB and flow features and explores the particularity of each modality. Extensive experiments on two benchmarks demonstrate the preferable performance of POLO.
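The cross-modality fusion described above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's actual architecture: the attention form (a sigmoid gate over channel-wise statistics), the feature shapes, and the function names are all assumptions made for the sake of the example. The key idea it reproduces is that the attention weight is estimated from one modality and applied to the other.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def cross_modality_fusion(rgb, flow):
    """Hypothetical sketch of POLO-style explicit cross-modality fusion.

    rgb, flow: (T, C) feature sequences for T temporal snippets.
    An attention weight estimated from one modality gates the other,
    while each branch otherwise stays independent.
    """
    # Channel-wise attention from each modality (assumed form: sigmoid
    # over the temporal mean; the paper's exact estimator may differ).
    w_from_rgb = sigmoid(rgb.mean(axis=0, keepdims=True))    # (1, C)
    w_from_flow = sigmoid(flow.mean(axis=0, keepdims=True))  # (1, C)

    # Cross application: RGB-derived weights modulate the flow branch,
    # and vice versa, bridging complementarity across modalities.
    fused_flow = flow * w_from_rgb
    fused_rgb = rgb * w_from_flow
    return fused_rgb, fused_flow
```

Each branch keeps its own features (preserving modality particularity) while receiving a multiplicative hint from the other modality, in contrast to early fusion (concatenation) or late fusion (merging only localization results).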

Original language: English
Article number: 9362259
Pages (from-to): 503-507
Number of pages: 5
Journal: IEEE Signal Processing Letters
Volume: 28
DOI
Publication status: Published - 2021
