Generalized Weakly Supervised Object Localization

Dingwen Zhang, Guangyu Guo, Wenyuan Zeng, Lei Li, Junwei Han

Research output: Contribution to journal › Article › peer-review

19 Scopus citations

Abstract

With the goal of learning to localize specific object semantics using low-cost image-level annotation, weakly supervised object localization (WSOL) has been receiving increasing attention in recent years. Although the existing literature has studied a number of major issues in this field, one important yet challenging scenario, where the test object semantics may have appeared in the training phase (seen categories) or may never have been observed before (unseen categories), remains unexplored by existing works. We define this scenario as generalized WSOL (GWSOL) and make a pioneering effort to study it in this article. By leveraging attribute vectors to associate seen and unseen categories, we incorporate three modeling components, i.e., class-sensitive modeling, semantic-agnostic modeling, and content-aware modeling, into a unified end-to-end learning framework. This design enables our model to recognize and localize unconstrained object semantics, learn compact and discriminative features that can represent potential unseen categories, and customize content-aware attribute weights to avoid localizing on misleading attribute elements. To advance this research direction, we contribute manual bounding-box annotations for the widely used AwA2 dataset and benchmark GWSOL methods. Comprehensive experiments demonstrate the effectiveness of our proposed learning framework and each of the considered modeling components.
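The attribute-based design described in the abstract can be pictured with a short, hypothetical sketch: a class-sensitive head scores per-attribute responses against fixed class-attribute vectors (covering both seen and unseen categories), while a content-aware gate re-weights attributes per image before localization. This is not the authors' implementation; the module names, dimensions, CAM-style weighting, and the AwA2-like 85-attribute setup below are illustrative assumptions only.

```python
# Minimal illustrative sketch (not the paper's code): attribute-driven class
# scoring with content-aware attribute weights and a CAM-style localization map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeLocalizer(nn.Module):
    """Toy GWSOL-style head: per-pixel attribute responses, per-image attribute
    re-weighting, and class scores for both seen and unseen categories."""

    def __init__(self, feat_dim: int, num_attrs: int, class_attrs: torch.Tensor):
        super().__init__()
        # 1x1 conv maps backbone features to per-attribute response maps.
        self.attr_head = nn.Conv2d(feat_dim, num_attrs, kernel_size=1)
        # Content-aware gate: predicts per-image attribute weights from pooled features.
        self.gate = nn.Sequential(nn.Linear(feat_dim, num_attrs), nn.Sigmoid())
        # Fixed class-attribute matrix (num_classes x num_attrs), seen + unseen rows.
        self.register_buffer("class_attrs", class_attrs)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) backbone features.
        attr_maps = self.attr_head(feats)                    # (B, A, H, W)
        pooled = feats.mean(dim=(2, 3))                      # (B, C)
        weights = self.gate(pooled)                          # (B, A) content-aware weights
        attr_scores = attr_maps.mean(dim=(2, 3)) * weights   # (B, A) weighted image-level attributes
        class_scores = attr_scores @ self.class_attrs.t()    # (B, K) attribute compatibility
        # Class-sensitive localization maps: weight attribute maps by each class's attributes.
        loc_maps = torch.einsum(
            "bahw,ka->bkhw",
            attr_maps * weights[:, :, None, None],
            self.class_attrs,
        )                                                    # (B, K, H, W)
        return class_scores, loc_maps


if __name__ == "__main__":
    torch.manual_seed(0)
    class_attrs = torch.rand(10, 85)       # e.g., 10 classes with 85 AwA2-style attributes
    model = AttributeLocalizer(feat_dim=512, num_attrs=85, class_attrs=class_attrs)
    feats = torch.randn(2, 512, 14, 14)    # stand-in for backbone features
    scores, maps = model(feats)
    pred = scores.argmax(dim=1)
    heatmaps = F.relu(maps[torch.arange(2), pred])           # localization heatmap per image
    print(scores.shape, heatmaps.shape)    # torch.Size([2, 10]) torch.Size([2, 14, 14])
```

Because the class-attribute matrix can include rows for categories never seen during training, the same head can, in principle, score and localize unseen categories at test time, which is the generalized setting the article studies.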

Original language: English
Pages (from-to): 5395-5406
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 4
DOIs
State: Published - 1 Apr 2024

Keywords

  • Object localization
  • unseen object category
  • weakly supervised learning
