Abstract
Human–object interaction (HOI) detection aims to localize human–object pairs in images and predict their actions. It is an essential step for many visual reasoning tasks, such as visual question answering (VQA), image retrieval, and surveillance event detection. The core challenge of this task is compositional learning, especially in the few-shot setting. A straightforward approach is to design a dedicated model for each specific pair. However, maintaining these independent models is impractical due to the combinatorial explosion of verb–object pairs. To address the above problems, we propose a new Conditional Hyper-Adapter (CHA) method based on meta-learning. Unlike previous works, our approach treats each <verb, object> combination as an independent sub-task. Meanwhile, we design two kinds of Hyper-Adapter structures to guide the model to learn "how to address HOI detection". By combining different conditions with a hypernetwork, CHA adaptively generates a subset of the model's parameters, improving its representation and generalization ability. Finally, the proposed method can serve as a plug-and-play module that boosts existing HOI detection models on widely used HOI benchmarks.
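To make the core idea concrete, below is a minimal sketch (in PyTorch) of a conditional hypernetwork that generates low-rank adapter parameters from a <verb, object> condition embedding. All module names, dimensions, and the low-rank parameterization are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: a hypernetwork generates adapter weights conditioned
# on an embedding of the <verb, object> sub-task. Names and shapes are
# assumptions for illustration only.
import torch
import torch.nn as nn

class ConditionalHyperAdapter(nn.Module):
    """Generates low-rank adapter weights from a condition embedding."""
    def __init__(self, cond_dim: int, feat_dim: int, rank: int = 8):
        super().__init__()
        self.rank = rank
        self.feat_dim = feat_dim
        # Hypernetwork: maps the condition to down- and up-projection weights.
        self.hyper = nn.Linear(cond_dim, 2 * feat_dim * rank)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (batch, feat_dim) backbone features
        # cond: (batch, cond_dim) embedding of the <verb, object> sub-task
        params = self.hyper(cond)
        down, up = params.split(self.feat_dim * self.rank, dim=-1)
        down = down.view(-1, self.feat_dim, self.rank)
        up = up.view(-1, self.rank, self.feat_dim)
        # Residual low-rank adaptation, with weights chosen per sub-task.
        delta = torch.bmm(torch.bmm(x.unsqueeze(1), down), up).squeeze(1)
        return x + delta

# Usage: adapt frozen HOI features for a specific <verb, object> condition.
adapter = ConditionalHyperAdapter(cond_dim=64, feat_dim=256)
feats = torch.randn(4, 256)
cond = torch.randn(4, 64)
out = adapter(feats, cond)  # (4, 256)
```

Because the adapter's weights are produced by the hypernetwork rather than stored per pair, a single module can cover all combinations, which is what makes such a design attachable to existing detectors as a plug-and-play component.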
Field | Value
---|---
Original language | English
Article number | 111075
Journal | Pattern Recognition
Volume | 159
DOI | 
Publication status | Published - Mar 2025