CHA: Conditional Hyper-Adapter method for detecting human–object interaction

Mengyang Sun, Wei Suo, Ji Wang, Peng Wang, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Human–object interaction (HOI) detection aims to capture human–object pairs in images and predict their actions. It is an essential step for many visual reasoning tasks, such as VQA, image retrieval, and surveillance event detection. The core challenge of this task is compositional learning, especially in a few-shot setting. A straightforward approach is to design a dedicated model for each specific pair; however, maintaining such independent models is unrealistic due to combinatorial explosion. To address these problems, we propose a new Conditional Hyper-Adapter (CHA) method based on meta-learning. Different from previous works, our approach regards each <verb, object> pair as an independent sub-task. Meanwhile, we design two kinds of Hyper-Adapter structures to guide the model to learn "how to address HOI detection". By combining different conditions with a hypernetwork, CHA can adaptively generate partial parameters and improve the representation and generalization ability of the model. Finally, the proposed method can be viewed as a plug-and-play module that boosts existing HOI detection models on widely used HOI benchmarks.
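
The abstract gives only a high-level description, but the central idea (a hypernetwork that generates adapter parameters conditioned on the <verb, object> sub-task) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class name, dimensions, and the way the condition embedding is formed are assumptions.

    # Minimal sketch of a conditional hyper-adapter (illustrative only).
    import torch
    import torch.nn as nn

    class ConditionalHyperAdapter(nn.Module):
        """A hypernetwork generates bottleneck-adapter weights from a
        condition embedding (e.g., a <verb, object> sub-task embedding)."""

        def __init__(self, d_model: int, bottleneck: int, d_cond: int):
            super().__init__()
            self.d_model, self.bottleneck = d_model, bottleneck
            # Hypernetwork: maps the condition to the adapter's parameters
            # (a down-projection and an up-projection).
            n_params = d_model * bottleneck * 2
            self.hyper = nn.Sequential(
                nn.Linear(d_cond, 256), nn.ReLU(), nn.Linear(256, n_params)
            )

        def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
            # x:    (batch, seq, d_model) features from a frozen HOI backbone
            # cond: (batch, d_cond) condition embedding for the sub-task
            params = self.hyper(cond)
            w_down, w_up = params.split(self.d_model * self.bottleneck, dim=-1)
            w_down = w_down.view(-1, self.d_model, self.bottleneck)
            w_up = w_up.view(-1, self.bottleneck, self.d_model)
            # Bottleneck adapter with a residual connection; only the shared
            # hypernetwork parameters are trained, the generated weights are
            # produced on the fly per condition.
            h = torch.relu(torch.bmm(x, w_down))
            return x + torch.bmm(h, w_up)

    # Usage: adapt backbone features for a specific <verb, object> sub-task.
    adapter = ConditionalHyperAdapter(d_model=256, bottleneck=32, d_cond=128)
    feats = torch.randn(4, 100, 256)   # e.g., DETR-style query features
    cond = torch.randn(4, 128)         # e.g., embedding of <ride, bicycle>
    out = adapter(feats, cond)         # same shape as feats

Because the generated weights are a function of the condition, the same shared hypernetwork can serve many <verb, object> combinations without maintaining a separate model per pair, which is the plug-and-play behavior the abstract describes.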

Original language: English
Article number: 111075
Journal: Pattern Recognition
Volume: 159
State: Published - Mar 2025

Keywords

  • Human–object interaction detection
  • Hypernetwork
  • Meta learning
