Multi-Modal Joint Clustering with Application for Unsupervised Attribute Discovery

Liangchen Liu, Feiping Nie, Arnold Wiliem, Zhihui Li, Teng Zhang, Brian C. Lovell

Research output: Contribution to journal › Article › peer-review

38 Scopus citations

Abstract

Utilizing multiple descriptions/views of an object is often useful in image clustering tasks. Although many methods have been proposed to cluster multi-view data effectively, some problems remain unaddressed, such as the errors introduced by traditional spectral-based clustering methods due to their two disjoint stages: 1) eigendecomposition and 2) discretization of the new representations. In this paper, we propose a unified clustering framework that jointly learns the two stages while utilizing multiple descriptions of the data. More specifically, two learning methods are derived from this framework: 1) constructing a graph from the different views and 2) combining multiple graphs. Furthermore, benefiting from the separability and local-graph-preserving properties of the proposed methods, we derive a novel unsupervised automatic attribute discovery method. We validate the efficacy of our methods on five data sets, showing that the proposed joint learning clustering methods outperform recent state-of-the-art methods. We also show that the framework yields a novel method for unsupervised automatic attribute discovery tasks.
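For context, the "two disjoint stages" the abstract refers to can be illustrated with a minimal spectral-clustering sketch (a generic illustration of the conventional pipeline, not the authors' joint method): stage 1 eigendecomposes the graph Laplacian to obtain a continuous embedding, and stage 2 discretizes that embedding into cluster labels — here, for two clusters, by thresholding the Fiedler vector.

```python
import numpy as np

# Toy graph: two 3-node cliques joined by one weak bridge edge.
W = np.zeros((6, 6))
for a, b in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[a, b] = W[b, a] = 1.0
W[2, 3] = W[3, 2] = 0.1  # weak bridge between the two cliques

# Stage 1: eigendecomposition of the graph Laplacian L = D - W.
L = np.diag(W.sum(axis=1)) - W
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]               # second-smallest eigenvector

# Stage 2: discretization of the continuous embedding into labels.
# For two clusters, thresholding at zero suffices; in general this
# step is a separate k-means run, and its disconnect from stage 1
# is the source of error the abstract discusses.
labels = (fiedler > 0).astype(int)
print(labels)  # the two cliques receive different labels
```

Because stage 2 optimizes a different objective than stage 1, the final labels need not be optimal for the original clustering problem — which motivates learning both stages jointly, as the paper proposes.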

Original language: English
Pages (from-to): 4345-4356
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 27
Issue number: 9
State: Published - Sep 2018

Keywords

  • attribute
  • Image clustering
  • unsupervised
  • unsupervised automatic attribute discovery
