Multimodal feature fusion for 3D shape recognition and retrieval

Shuhui Bu, Shaoguang Cheng, Zhenbao Liu, Junwei Han

Research output: Contribution to journal › Article › peer-review

16 Scopus citations

Abstract

Three-dimensional shapes carry several kinds of information that jointly characterize the shape, yet traditional methods perform recognition or retrieval using only one of them. This article presents a 3D feature learning framework that fuses data from different modalities to improve the discriminability of unimodal features. Two independent deep belief networks (DBNs) learn high-level features from the low-level features of each modality, and a restricted Boltzmann machine (RBM) is trained on top of them to mine the deep correlations between the modalities. Experiments demonstrate that the proposed method achieves better performance than using any single modality alone.
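
To make the described pipeline concrete, here is a minimal numpy sketch of such a two-stream fusion network. It assumes Bernoulli RBMs trained with one-step contrastive divergence (CD-1); the layer sizes, learning rate, and the random stand-ins for the two descriptor modalities are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli restricted Boltzmann machine trained with CD-1."""

    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = rng.normal(0.0, 0.01, size=(n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_vis)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs reconstruction step.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # CD-1 gradient approximation and parameter update.
        n = len(v0)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_vis += self.lr * (v0 - v1).mean(axis=0)
        self.b_hid += self.lr * (h0 - h1).mean(axis=0)

def train_rbm(rbm, data, epochs=10, batch=32):
    for _ in range(epochs):
        for i in range(0, len(data), batch):
            rbm.cd1_step(data[i:i + batch])

def train_dbn(data, layer_sizes):
    """Greedy layer-wise pretraining: each RBM feeds the next."""
    rbms, x = [], data
    for n_hid in layer_sizes:
        rbm = RBM(x.shape[1], n_hid)
        train_rbm(rbm, x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)
    return rbms, x

# Random stand-ins for two low-level shape descriptor modalities
# (e.g. a geometric descriptor and a view-based descriptor).
geom = rng.random((256, 100))
view = rng.random((256, 80))

# One DBN per modality learns high-level unimodal features.
_, h_geom = train_dbn(geom, [64, 32])
_, h_view = train_dbn(view, [64, 32])

# A joint RBM over the concatenated top-layer features mines the
# cross-modal correlations; its hidden layer is the fused feature.
joint_in = np.concatenate([h_geom, h_view], axis=1)
joint = RBM(joint_in.shape[1], 48)
train_rbm(joint, joint_in)
fused = joint.hidden_probs(joint_in)
print("fused feature shape:", fused.shape)  # (256, 48)
```

In practice, the fused hidden activations would feed a classifier for shape recognition or a distance metric for shape retrieval.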

Original language: English
Article number: 52
Pages (from-to): 38-46
Number of pages: 9
Journal: IEEE Multimedia
Volume: 21
Issue number: 4
DOIs
State: Published - 1 Oct 2014

Keywords

  • Accuracy
  • Deep learning
  • Feature extraction
  • Fusion
  • Learning systems
  • Multimedia
  • Multimodal feature fusion
  • Research and development
  • Shape analysis
  • Shape recognition
  • Shape retrieval
  • Solid modeling
  • Three-dimensional displays
