Abstract
In domain adaptation (DA), label-induced losses generally play a dominant role, and most previous models take hard or soft labels as their inputs. However, both types of labels may mislead the modeling of label-induced losses: hard labels are sensitive to wrongly predicted samples, while soft labels may introduce label noise; either can therefore cause negative transfer. To relieve this problem, we propose a novel label learning approach, namely confidence regularized label propagation (CRLP), which regularizes the confidence of predicted soft labels with F-norm or L21-norm constraints. We show that maximizing either of these two constraints is equivalent to minimizing an entropy loss. Specifically, we illustrate that the L21-norm is more suitable for DA than the F-norm when the dataset contains a large number of categories. We then leverage the regularized soft labels produced by CRLP to reformulate, in a probabilistic manner, several popular label-induced losses that account for feature transferability and discriminability, such as class-wise maximum mean discrepancy, intra-class compactness, and inter-class dispersion, yielding a novel DA method (CRLP-DA). Comprehensive analysis and experiments on four cross-domain object recognition datasets verify that the proposed CRLP-DA outperforms several state-of-the-art methods, notably achieving 59.5% accuracy on the Office10+Caltech10 dataset with SURF features. To facilitate reproduction, our preliminary Matlab code will be available at https://github.com/WWLoveTransfer/CRLP-DA/.
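The stated equivalence between norm maximization and entropy minimization can be illustrated with a small numerical check. This is a sketch, not the paper's implementation: it assumes each row of a soft-label matrix is a per-sample class probability vector, and the axis over which the L21-norm takes its inner L2 norm is an assumption here. Sharper (more confident) soft labels yield larger F-norm and L21-norm values and lower entropy.

```python
import numpy as np

def entropy(Y):
    # Total Shannon entropy of the row-wise soft labels
    P = np.clip(Y, 1e-12, 1.0)
    return float(-(P * np.log(P)).sum())

def fro_sq(Y):
    # Squared Frobenius norm of the soft-label matrix
    return float((Y ** 2).sum())

def l21(Y):
    # L21-norm: sum of column-wise L2 norms (column axis is an
    # assumption for illustration; the paper defines the exact form)
    return float(np.linalg.norm(Y, axis=0).sum())

# Two soft-label matrices over 3 classes: confident vs. near-uniform
confident = np.array([[0.90, 0.05, 0.05],
                      [0.05, 0.90, 0.05]])
uncertain = np.array([[0.40, 0.30, 0.30],
                      [0.34, 0.33, 0.33]])

# Higher confidence -> larger norms, lower entropy
assert fro_sq(confident) > fro_sq(uncertain)
assert l21(confident) > l21(uncertain)
assert entropy(confident) < entropy(uncertain)
print("F-norm^2:", fro_sq(confident), "vs", fro_sq(uncertain))
print("entropy :", entropy(confident), "vs", entropy(uncertain))
```

The check shows why pushing either norm up acts as a confidence regularizer: both norms are maximized by one-hot (zero-entropy) rows and minimized by uniform (maximum-entropy) rows.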
Original language | English
---|---
Pages (from-to) | 3319-3333
Number of pages | 15
Journal | IEEE Transactions on Circuits and Systems for Video Technology
Volume | 32
Issue | 6
DOI |
Publication status | Published - 1 Jun 2022