Learning 2D Invariant Affordance Knowledge for 3D Affordance Grounding

Xianqiang Gao, Pingrui Zhang, Delin Qu, Dong Wang, Zhigang Wang, Yan Ding, Bin Zhao

Research output: Contribution to journal › Conference article › peer-review

Abstract

3D Object Affordance Grounding aims to predict the functional regions of a 3D object and lays the foundation for a wide range of applications in robotics. Recent advances tackle this problem by learning a mapping between 3D regions and a single human-object interaction image. However, the geometric structure of the 3D object and that of the object in the human-object interaction image are not always consistent, leading to poor generalization. To address this issue, we propose to learn generalizable, invariant affordance knowledge from multiple human-object interaction images within the same affordance category. Specifically, we introduce the Multi-Image Guided Invariant-Feature-Aware 3D Affordance Grounding (MIFAG) framework, which grounds 3D object affordance regions by identifying common interaction patterns across multiple human-object interaction images. First, the Invariant Affordance Knowledge Extraction Module (IAM) uses an iterative updating strategy to gradually extract aligned affordance knowledge from multiple images and integrate it into an affordance dictionary. Then, the Affordance Dictionary Adaptive Fusion Module (ADM) learns comprehensive point cloud representations that consider all affordance candidates across the multiple images. In addition, we construct the Multi-Image and Point Affordance (MIPA) benchmark, on which our method outperforms existing state-of-the-art methods across various experimental comparisons.
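To make the described pipeline concrete, below is a minimal PyTorch sketch of how the two modules might be wired together, assuming cross-attention both for the iterative dictionary updates in IAM and for the point-feature fusion in ADM. All shapes, hyperparameters, and internals here are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only; module internals are assumptions, not the paper's code.
import torch
import torch.nn as nn

class IAM(nn.Module):
    """Iteratively distills invariant affordance knowledge from features of
    multiple human-object interaction (HOI) images into a learnable dictionary."""
    def __init__(self, dim=256, dict_size=16, num_iters=3):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(dict_size, dim))  # affordance dictionary
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.num_iters = num_iters

    def forward(self, img_feats):
        # img_feats: (B, n_images * n_tokens, dim), tokens from multiple HOI images
        d = self.dictionary.unsqueeze(0).expand(img_feats.size(0), -1, -1)
        for _ in range(self.num_iters):
            # dictionary entries query the image tokens and are gradually refined
            upd, _ = self.attn(d, img_feats, img_feats)
            d = d + upd
        return d  # (B, dict_size, dim) aligned affordance knowledge

class ADM(nn.Module):
    """Fuses all affordance candidates in the dictionary into per-point features."""
    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.head = nn.Linear(dim, 1)

    def forward(self, point_feats, dictionary):
        # point_feats: (B, n_points, dim); each point attends to every candidate
        fused, _ = self.attn(point_feats, dictionary, dictionary)
        return torch.sigmoid(self.head(point_feats + fused)).squeeze(-1)  # (B, n_points)

# Usage with dummy features (encoders for images and point clouds are omitted):
iam, adm = IAM(), ADM()
imgs = torch.randn(2, 4 * 196, 256)   # 4 HOI images, 196 tokens each
pts = torch.randn(2, 2048, 256)       # point cloud features
scores = adm(pts, iam(imgs))          # per-point affordance scores, shape (2, 2048)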

Original language: English
Pages (from-to): 3095-3103
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 3
DOI
Publication status: Published - 11 Apr 2025
Event: 39th Annual AAAI Conference on Artificial Intelligence, AAAI 2025 - Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
