TY - JOUR
T1 - Deep binary reconstruction for cross-modal hashing
AU - Hu, Di
AU - Nie, Feiping
AU - Li, Xuelong
N1 - Publisher Copyright:
© 1999-2012 IEEE.
PY - 2019/4
Y1 - 2019/4
AB - To satisfy the huge storage and organization requirements of large-scale multimodal data, hashing techniques have been widely employed to learn binary representations for cross-modal retrieval tasks. However, optimizing the hashing objective under the necessary binary constraint is a genuinely difficult problem. A common strategy is to relax the constraint and perform individual binarizations over the learned real-valued representations. In this paper, in contrast to such conventional two-stage methods, we propose to learn the binary codes directly, with a model that can be optimized by a standard gradient descent optimizer. We first present a theoretical guarantee of the effectiveness of the multimodal network in preserving inter- and intra-modal consistencies. Based on this guarantee, a novel multimodal deep binary reconstruction model is proposed, which can be trained to simultaneously model the correlation across modalities and learn the binary hashing codes. To generate binary codes while avoiding the vanishing-gradient problem, a novel activation function first scales the input activations to suitable ranges and then feeds them to the tanh function to build the hashing layer. This composite function is named adaptive tanh. Both linear and nonlinear scaling methods are proposed and shown to generate efficient codes after training the network. Extensive ablation studies and comparison experiments are conducted on the image2text and text2image retrieval tasks; the method outperforms several state-of-the-art deep learning methods with respect to different evaluation metrics.
KW - Cross-modal hashing
KW - Binary reconstruction
UR - http://www.scopus.com/inward/record.url?scp=85052699272&partnerID=8YFLogxK
U2 - 10.1109/TMM.2018.2866771
DO - 10.1109/TMM.2018.2866771
M3 - Article
AN - SCOPUS:85052699272
SN - 1520-9210
VL - 21
SP - 973
EP - 985
JO - IEEE Transactions on Multimedia
JF - IEEE Transactions on Multimedia
IS - 4
M1 - 8447211
ER -