Abstract
In domain adaptation (DA), label-induced losses generally dominate the objective, and most previous models take hard or soft labels as their inputs. However, both types of labels can mislead the modeling of label-induced losses: hard labels are sensitive to wrongly predicted samples, while soft labels may introduce label noise, so either can cause negative transfer. To alleviate this problem, we propose a novel label learning approach, confidence regularized label propagation (CRLP), which regularizes the confidence of predicted soft labels with an F-norm or L21-norm constraint. We show that maximizing either of these two constraints is equivalent to minimizing an entropy loss. In particular, we illustrate that the L21-norm is more suitable for DA than the F-norm when the dataset contains a large number of categories. We then leverage the regularized soft labels produced by CRLP to reformulate, in a probabilistic manner, several popular label-induced losses that promote feature transferability and discriminability, such as class-wise maximum mean discrepancy, intra-class compactness, and inter-class dispersion, yielding a novel DA method (CRLP-DA). Comprehensive analysis and experiments on four cross-domain object recognition datasets verify that the proposed CRLP-DA outperforms state-of-the-art methods, notably reaching 59.5% accuracy on the Office10+Caltech10 dataset with SURF features. To aid reproducibility, our preliminary Matlab code will be available at https://github.com/WWLoveTransfer/CRLP-DA/.
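As a reading aid, the norm-entropy equivalence claimed above can be sketched as follows; the notation here is ours, not taken from the paper. Let F be the row-stochastic soft-label matrix whose rows lie on the probability simplex:

```latex
% Sketch in our own notation: F \in \mathbb{R}^{n \times C} is the
% row-stochastic soft-label matrix with rows f_i on the probability simplex.
\|F\|_F^2 = \sum_{i=1}^{n}\sum_{j=1}^{C} f_{ij}^2,
\qquad
\|F\|_{2,1} = \sum_{i=1}^{n}\Big(\sum_{j=1}^{C} f_{ij}^2\Big)^{1/2}.
% Over the simplex, each per-row term is smallest at the uniform row
% (1/C, \dots, 1/C) and largest at a one-hot row -- precisely where the entropy
% H(f_i) = -\sum_{j} f_{ij}\log f_{ij} is largest and smallest, respectively.
% Hence maximizing either norm promotes the same confident, low-entropy
% predictions as minimizing an entropy loss.
```

One plausible reading of the large-C claim (our interpretation, not an argument from the abstract): at a near-uniform row, the L21 term is about 1/sqrt(C) while the squared F-norm term is about 1/C, so the F-norm reward decays faster as the number of categories grows, leaving the L21-norm a stronger regularization signal.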
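To make the "probabilistic reformulation" concrete, below is a minimal, hypothetical sketch of a soft-label-weighted class-wise MMD, assuming soft labels come from label propagation. The names (soft_class_mmd, Fs, Ft) are ours, a linear (mean-embedding) discrepancy stands in for a kernel MMD, and nothing here is taken from the authors' Matlab code:

```python
import numpy as np

def soft_class_mmd(Xs, Ys_onehot, Xt, Ft):
    """Class-wise MMD where target class means are soft-label-weighted.

    Xs: (ns, d) source features;  Ys_onehot: (ns, C) hard source labels.
    Xt: (nt, d) target features;  Ft: (nt, C) row-stochastic soft labels.
    Returns the summed squared distance between per-class means.
    """
    C = Ys_onehot.shape[1]
    loss = 0.0
    for c in range(C):
        ws = Ys_onehot[:, c]                 # hard indicator weights (source)
        wt = Ft[:, c]                        # soft probability weights (target)
        if ws.sum() == 0 or wt.sum() == 0:   # skip classes with no mass
            continue
        mu_s = (ws[:, None] * Xs).sum(0) / ws.sum()  # source class mean
        mu_t = (wt[:, None] * Xt).sum(0) / wt.sum()  # probability-weighted target mean
        loss += np.sum((mu_s - mu_t) ** 2)
    return loss

# Toy usage: 2 classes, 3-D features.
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(6, 3)), rng.normal(size=(5, 3))
Ys = np.eye(2)[rng.integers(0, 2, size=6)]   # one-hot source labels
Ft = rng.dirichlet(np.ones(2), size=5)       # soft target labels on the simplex
print(soft_class_mmd(Xs, Ys, Xt, Ft))
```

The design point this illustrates: every target sample contributes to every class mean in proportion to its predicted probability, so no hard pseudo-label commitment is made; intra-class compactness and inter-class dispersion can be weighted in the same probabilistic way.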
| Original language | English |
|---|---|
| Pages (from-to) | 3319-3333 |
| Number of pages | 15 |
| Journal | IEEE Transactions on Circuits and Systems for Video Technology |
| Volume | 32 |
| Issue number | 6 |
| DOIs | |
| State | Published - 1 Jun 2022 |
Keywords
- Domain adaptation
- F/L-norm
- entropy loss
- hard label
- label propagation
- label-induced losses
- soft label