Unsupervised deep hashing for large-scale visual search

Zhaoqiang Xia, Xiaoyi Feng, Jinye Peng, Abdenour Hadid

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

21 Scopus citations

Abstract

Learning-based hashing plays a pivotal role in large-scale visual search. However, most existing hashing algorithms learn shallow models that fail to produce representative binary codes. In this paper, we propose a novel hashing approach based on unsupervised deep learning that hierarchically transforms features into hash codes. Within the heterogeneous deep hashing framework, autoencoder layers with specific constraints model the nonlinear mapping between features and binary codes. A Restricted Boltzmann Machine (RBM) layer with constraints is then utilized to reduce the dimensionality in the Hamming space. Experiments on the visual search problem demonstrate the competitiveness of our proposed approach compared to the state of the art.
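The abstract describes mapping real-valued features to binary codes through a learned nonlinear encoder, then comparing items by Hamming distance. The following is only an illustrative sketch of that general idea, not the authors' method: a single tanh autoencoder layer trained by plain gradient descent, with its code thresholded at zero to obtain hash bits. All dimensions, learning rates, and data here are assumptions for demonstration.

```python
import numpy as np

# Hypothetical toy setup (not from the paper): random "features"
rng = np.random.default_rng(0)
n, d, k = 200, 32, 8                 # samples, feature dim, code bits
X = rng.standard_normal((n, d))

W = rng.standard_normal((d, k)) * 0.1   # encoder weights
V = rng.standard_normal((k, d)) * 0.1   # decoder weights
lr = 0.01

for _ in range(500):                 # gradient descent on reconstruction MSE
    H = np.tanh(X @ W)               # nonlinear encoding
    X_hat = H @ V                    # linear reconstruction
    err = X_hat - X
    grad_V = H.T @ err / n
    grad_H = err @ V.T * (1 - H**2)  # backprop through tanh
    grad_W = X.T @ grad_H / n
    V -= lr * grad_V
    W -= lr * grad_W

# Threshold the learned code at zero to get k-bit hash codes
codes = (np.tanh(X @ W) > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary code vectors."""
    return int(np.sum(a != b))

# Retrieval in Hamming space: rank items by distance to a query code
query = codes[0]
dists = [hamming(query, c) for c in codes[1:]]
```

The paper's framework additionally constrains the autoencoder and stacks an RBM layer to shrink the code length; this sketch only shows the basic encode-then-binarize pipeline that such methods share.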

Original language: English
Title of host publication: 2016 6th International Conference on Image Processing Theory, Tools and Applications, IPTA 2016
Editors: Matti Pietikainen, Abdenour Hadid, Miguel Bordallo Lopez
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781467389105
DOIs
State: Published - 17 Jan 2017
Event: 6th International Conference on Image Processing Theory, Tools and Applications, IPTA 2016 - Oulu, Finland
Duration: 12 Dec 2016 - 15 Dec 2016

Publication series

Name: 2016 6th International Conference on Image Processing Theory, Tools and Applications, IPTA 2016

Conference

Conference: 6th International Conference on Image Processing Theory, Tools and Applications, IPTA 2016
Country/Territory: Finland
City: Oulu
Period: 12/12/16 - 15/12/16

Keywords

  • Autoencoder
  • Deep learning
  • Learning based hashing
  • RBM
  • Unsupervised learning
