Multimodal feature fusion for 3D shape recognition and retrieval

Shuhui Bu, Shaoguang Cheng, Zhenbao Liu, Junwei Han

Research output: Contribution to journal › Article › peer review

16 Citations (Scopus)

Abstract

Three-dimensional shapes contain different kinds of information that jointly characterize the shape. Traditional methods, however, perform recognition or retrieval using only one type of information. This article presents a 3D feature learning framework that combines different modality data effectively to improve the discriminability of unimodal features. Two independent deep belief networks (DBNs) are employed to learn high-level features from low-level features, and a restricted Boltzmann machine (RBM) is trained to mine the deep correlations between the different modalities. Experiments demonstrate that the proposed method achieves better performance than unimodal baselines.
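The fusion idea in the abstract can be illustrated with a minimal sketch. This is an assumption-laden toy, not the authors' implementation: the two per-modality DBNs are replaced by precomputed feature matrices, and a small joint RBM trained with one-step contrastive divergence (CD-1) learns a shared hidden representation over the concatenated modalities.

```python
import numpy as np

class RBM:
    """Minimal binary RBM trained with CD-1 (illustrative only)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible biases
        self.b_h = np.zeros(n_hidden)   # hidden biases

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_v)

    def train(self, data, lr=0.1, epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        for _ in range(epochs):
            # Positive phase, then one Gibbs step (CD-1)
            h0 = self.hidden_probs(data)
            h0_sample = (rng.random(h0.shape) < h0).astype(float)
            v1 = self.visible_probs(h0_sample)
            h1 = self.hidden_probs(v1)
            n = len(data)
            self.W += lr * (data.T @ h0 - v1.T @ h1) / n
            self.b_v += lr * (data - v1).mean(axis=0)
            self.b_h += lr * (h0 - h1).mean(axis=0)

# Hypothetical high-level features from two modalities (stand-ins for
# the outputs of the two modality-specific DBNs in the paper).
rng = np.random.default_rng(1)
feat_a = rng.random((64, 16))   # e.g. view-based features
feat_b = rng.random((64, 16))   # e.g. geometry-based features
joint = np.concatenate([feat_a, feat_b], axis=1)  # multimodal input

rbm = RBM(n_visible=32, n_hidden=8)
rbm.train(joint)
fused = rbm.hidden_probs(joint)  # fused 8-D representation per shape
print(fused.shape)
```

The fused hidden activations could then be used as the shape descriptor for recognition or retrieval; the dimensions and training schedule here are arbitrary choices for the sketch.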

Original language: English
Article number: 52
Pages (from-to): 38-46
Number of pages: 9
Journal: IEEE Multimedia
Volume: 21
Issue number: 4
DOI
Publication status: Published - 1 Oct 2014
