Deep binary reconstruction for cross-modal hashing

Di Hu, Feiping Nie, Xuelong Li

Research output: Contribution to journal › Article › peer-review

120 Citations (Scopus)

Abstract

To meet the huge storage and organization demands of big multimodal data, hashing techniques have been widely employed to learn binary representations in cross-modal retrieval tasks. However, optimizing the hashing objective under the necessary binary constraint is a genuinely difficult problem. A common strategy is to relax the constraint and perform individual binarizations over the learned real-valued representations. In this paper, in contrast to conventional two-stage methods, we propose to directly learn the binary codes, so that the model can be optimized by a standard gradient descent optimizer. Before doing so, we present a theoretical guarantee of the effectiveness of the multimodal network in preserving the inter- and intra-modal consistencies. Based on this guarantee, a novel multimodal deep binary reconstruction model is proposed, which can be trained to simultaneously model the correlation across modalities and learn the binary hashing codes. To generate binary codes while avoiding the tiny-gradient problem, a novel activation function first scales the input activations to suitable scopes and then feeds them to the tanh function to build the hashing layer. Such a composite function is named adaptive tanh. Both linear and nonlinear scaling methods are proposed and shown to generate efficient codes after training the network. Extensive ablation studies and comparison experiments are conducted for the image2text and text2image retrieval tasks; the method is found to outperform several state-of-the-art deep-learning methods with respect to different evaluation metrics.
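The abstract gives only a verbal description of the adaptive tanh hashing layer (scale the activations, then apply tanh). The sketch below is a minimal PyTorch reading of the linear-scaling variant; the module name `AdaptiveTanh`, the per-bit learnable scale `alpha`, and the softplus reparameterization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveTanh(nn.Module):
    """Sketch of the adaptive-tanh hashing activation described in the
    abstract: activations are rescaled into a suitable range and then
    passed through tanh, so outputs approach {-1, +1} without the
    tiny-gradient problem of a saturated tanh."""

    def __init__(self, code_length: int, init_scale: float = 1.0):
        super().__init__()
        # One scale per hash bit (a linear-scaling variant; assumption).
        self.alpha = nn.Parameter(torch.full((code_length,), init_scale))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # softplus keeps the learned scale positive during training.
        return torch.tanh(F.softplus(self.alpha) * x)

# Usage: near-binary codes during training, hard codes at retrieval time.
hash_layer = AdaptiveTanh(code_length=64)
features = torch.randn(8, 64)              # activations from a modality network
codes = torch.sign(hash_layer(features))   # {-1, +1} binary hash codes
```

A nonlinear scaling variant, also proposed in the paper, would replace the fixed linear multiplication with a nonlinear transform of the activations before the tanh; its exact form is not specified in the abstract.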

Original language: English
Article number: 8447211
Pages (from-to): 973-985
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 21
Issue number: 4
DOI
Publication status: Published - Apr 2019
