Improved deep metric learning with local neighborhood component analysis

Abstract
Deep metric learning (DML) aims to learn a discriminative feature space in which intra-class similarities are large and inter-class similarities are small. Most recent studies focus on designing loss functions or sampling strategies while overlooking a crucial limitation of mini-batch training: mini-batch-based approaches do not explore the global structural similarities among samples in the feature space, so an instance and its k-nearest neighbors may not be semantically consistent. To address this, we propose Local Neighborhood Component Analysis (LNCA) to improve deep metric learning. Specifically, LNCA maintains a feature memory bank that stores the feature vectors of all instances, and uses it to estimate global structural similarities and to determine each sample's k nearest neighbors in the feature space. To refine a sample's local neighborhood, LNCA further introduces a metric that simultaneously attracts positive neighbors and repels negative neighbors. LNCA is a plug-and-play module that can be integrated into a general DML framework. Experimental results show that it significantly boosts the generalization performance of existing DML approaches.
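The two ingredients the abstract names, a memory bank queried for k nearest neighbors and an NCA-style objective that attracts same-class neighbors while repelling different-class ones, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function names (`knn_from_memory_bank`, `lnca_loss`), the cosine-similarity choice, and the `temperature` parameter are illustrative assumptions.

```python
import numpy as np

def knn_from_memory_bank(query, bank, k):
    # Cosine similarity between the query feature and every stored feature
    # (assumption: the paper's global structure is approximated here by
    # similarities against a bank of all instance features).
    q = query / np.linalg.norm(query)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = b @ q
    # Indices of the k most similar stored features, most similar first.
    return np.argsort(-sims)[:k], sims

def lnca_loss(query, bank, labels, query_label, k=5, temperature=0.1):
    # NCA-style objective over the query's k nearest neighbors: maximizing
    # the softmax probability mass on same-class (positive) neighbors
    # attracts positives and repels negatives in a single term.
    idx, sims = knn_from_memory_bank(query, bank, k)
    logits = sims[idx] / temperature
    p = np.exp(logits - logits.max())
    p /= p.sum()
    positives = labels[idx] == query_label
    return -np.log(p[positives].sum() + 1e-12)
```

A query whose nearest stored neighbors share its label yields a small loss; one surrounded by other-class features yields a large loss, which is the semantic-consistency property the abstract argues mini-batch training alone cannot enforce.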
| Original language | English |
|---|---|
| Pages (from-to) | 165-176 |
| Number of pages | 12 |
| Journal | Information Sciences |
| Volume | 617 |
| DOIs | |
| State | Published - Dec 2022 |
Keywords
- Feature learning
- Image retrieval
- Low-dimensional embedding
- Metric learning