Smooth-Guided Implicit Data Augmentation for Domain Generalization

Mengzhu Wang, Junze Liu, Ge Luo, Shanshan Wang, Wei Wang, Long Lan, Ye Wang, Feiping Nie

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

The training process of a domain generalization (DG) model involves utilizing one or more interrelated source domains to attain optimal performance on an unseen target domain. Existing DG methods often rely on auxiliary networks or incur high computational costs to improve the model's generalization ability by incorporating a diverse set of source domains. In contrast, this work proposes Smooth-Guided Implicit Data Augmentation (SGIDA), which operates in the feature space to capture the diversity of source domains. To amplify the model's generalization capacity, a distance metric learning (DML) loss function is incorporated. Additionally, rather than depending on deep features, the proposed approach employs logits produced from cross-entropy (CE) losses with infinite augmentations. A theoretical analysis shows that logits are effective in estimating distances defined on the original features, and the proposed approach is thoroughly analyzed to explain why logits are beneficial for DG. Moreover, to increase the diversity of the source domains, a sampling-based method called smooth is introduced to obtain semantic directions from interclass relations. The effectiveness of the proposed approach is demonstrated through extensive experiments on widely used DG, object detection, and remote sensing datasets, where it achieves significant improvements over existing state-of-the-art methods across various backbone networks.
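The abstract's "CE losses with infinite augmentations" points to computing the expected cross-entropy loss under infinitely many Gaussian feature-space augmentations in closed form on the logits, in the spirit of implicit semantic data augmentation (ISDA). Below is a minimal PyTorch-style sketch of such a surrogate loss, not the authors' actual SGIDA implementation: the function name, tensor shapes, the per-class covariance input `cov`, and the strength parameter `lam` are illustrative assumptions.

```python
# Sketch of an ISDA-style implicit augmentation loss (hypothetical
# reconstruction, not SGIDA's published code). Each feature is implicitly
# augmented with infinitely many samples from a class-conditional Gaussian,
# and the expected CE loss is upper-bounded in closed form on the logits.
import torch
import torch.nn.functional as F

def implicit_augmentation_ce(features, labels, weight, bias, cov, lam):
    """features: (N, D) deep features; labels: (N,) class indices;
    weight: (C, D), bias: (C,) linear classifier; cov: (C, D, D)
    class-conditional covariances; lam: augmentation strength."""
    logits = features @ weight.t() + bias           # (N, C) plain logits
    w_y = weight[labels]                            # (N, D) weights of true classes
    diff = weight.unsqueeze(0) - w_y.unsqueeze(1)   # (N, C, D): w_j - w_{y_i}
    sigma = cov[labels]                             # (N, D, D) per-sample covariance
    # Quadratic term (w_j - w_{y_i})^T Sigma_{y_i} (w_j - w_{y_i}) for each class j;
    # it vanishes for j = y_i, so adding it to every logit is safe.
    quad = torch.einsum('ncd,nde,nce->nc', diff, sigma, diff)
    aug_logits = logits + 0.5 * lam * quad          # closed-form "infinite" augmentation
    return F.cross_entropy(aug_logits, labels)
```

Because the augmentation enters only through this quadratic correction to the logits, no augmented features are ever materialized, which is what keeps the computational cost close to a standard CE pass.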

Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOI
Publication status: Accepted/In press - 2024
