Logit prototype learning with active multimodal representation for robust open-set recognition

Yimin Fu, Zhunga Liu, Zicheng Wang

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Robust open-set recognition (OSR) performance has become a prerequisite for pattern recognition systems in real-world applications. However, existing OSR methods are primarily implemented on the basis of single-modal perception, and their performance is limited when single-modal data fail to provide a sufficient description of the objects. Although multimodal data can provide more comprehensive information than single-modal data, the learning of decision boundaries can be affected by the feature representation gap between different modalities. To effectively integrate multimodal data for robust OSR performance, we propose logit prototype learning (LPL) with active multimodal representation. In LPL, the input multimodal data are transformed into the logit space, enabling direct exploration of intermodal correlations without the impact of scale inconsistency. Then, the fusion weights of each modality are determined using an entropy-based uncertainty estimation method. This approach realizes adaptive adjustment of the fusion strategy to provide comprehensive descriptions in the presence of external disturbances. Moreover, the single-modal and multimodal representations are jointly and interactively optimized to learn discriminative decision boundaries. Finally, a stepwise recognition rule is employed to reduce the misclassification risk and facilitate the distinction between known and unknown classes. Extensive experiments on three multimodal datasets demonstrate the effectiveness of the proposed method.
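The abstract describes weighting each modality's logits by an entropy-based uncertainty estimate before fusion. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of the general idea: each modality's confidence is taken as one minus its normalized predictive entropy, and these confidences are renormalized into fusion weights over the per-modality logits. The function names and the specific weighting scheme are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def entropy_weighted_fusion(logits_per_modality):
    """Illustrative entropy-based fusion (an assumed scheme, not the
    paper's exact method): modalities with low predictive entropy
    (i.e., confident predictions) receive larger fusion weights.

    logits_per_modality: list of arrays, each of shape (batch, classes).
    Returns the fused logits and the per-modality weights.
    """
    probs = [softmax(z) for z in logits_per_modality]
    # Predictive entropy of each modality, shape (num_modalities, batch).
    ents = np.array([-(p * np.log(p + 1e-12)).sum(axis=-1) for p in probs])
    num_classes = logits_per_modality[0].shape[-1]
    # Confidence = 1 - normalized entropy (entropy is at most log(K)).
    conf = 1.0 - ents / np.log(num_classes)
    # Renormalize confidences into fusion weights that sum to 1.
    weights = conf / conf.sum(axis=0, keepdims=True)
    fused = sum(w[..., None] * z for w, z in zip(weights, logits_per_modality))
    return fused, weights
```

For example, fusing a confident modality (peaked logits) with an uninformative one (uniform logits) assigns nearly all of the weight to the confident modality, which matches the adaptive behavior the abstract describes under external disturbances.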

Original language: English
Article number: 162204
Journal: Science China Information Sciences
Volume: 67
Issue number: 6
DOIs
State: Published - Jun 2024

Keywords

  • logit prototype learning
  • multimodal perception
  • open-set recognition
  • uncertainty estimation
