TY - GEN
T1 - Generating Out-of-Distribution Examples via Feature Crossover against Overconfidence Issue
AU - Ma, Xiaolong
AU - Zhu, Peican
AU - Peng, Weilong
AU - Tang, Keke
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - Overconfident predictions on out-of-distribution (OOD) samples pose a significant challenge to the reliability and robustness of deep neural networks (DNNs). The OOD overconfidence issue arises because DNNs learn only the features of in-distribution (ID) samples during training and lack any supervisory signal from OOD samples. The problem can therefore be alleviated by constructing an auxiliary OOD dataset. If the auxiliary dataset is genuinely OOD yet close to the ID distribution, it can teach the model more OOD knowledge and help it distinguish OOD samples from ID ones, which manifests as the model outputting a more evenly distributed confidence score for OOD samples. The key lies in how to construct such a high-quality auxiliary OOD dataset. In this paper, we make a preliminary exploration of fusing high-level features between samples and propose a simple but effective method that generates OOD samples through feature crossover in the feature space. The method requires only ID data to generate a large number of OOD samples. Experimental results demonstrate that constructing OOD samples with our method effectively alleviates the problem of DNNs being overconfident on OOD samples.
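The abstract specifies feature crossover only at a high level, so the sketch below shows one plausible reading of the idea: swap a random subset of high-level feature dimensions between pairs of ID samples to synthesize virtual outliers, then train the classifier head to output a uniform confidence distribution on them. All names here (feature_crossover, ood_uniform_loss, backbone, head, lambda_ood, swap_ratio) are illustrative assumptions, not identifiers from the paper.

import torch
import torch.nn.functional as F

def feature_crossover(feats: torch.Tensor, swap_ratio: float = 0.5) -> torch.Tensor:
    # Pair each sample with a random partner from the same ID batch.
    perm = torch.randperm(feats.size(0), device=feats.device)
    # Choose a random subset of feature dimensions to exchange.
    mask = (torch.rand(feats.size(1), device=feats.device) < swap_ratio).float()
    # Crossed features keep unmasked dims and take masked dims from the partner.
    return feats * (1.0 - mask) + feats[perm] * mask

def ood_uniform_loss(logits: torch.Tensor) -> torch.Tensor:
    # Cross-entropy against the uniform distribution (up to a constant),
    # driving the model toward evenly distributed confidence on synthesized OOD.
    return -F.log_softmax(logits, dim=1).mean()

# Hypothetical training step combining ID classification with the OOD term:
# feats    = backbone(x_id)                        # high-level ID features
# loss_id  = F.cross_entropy(head(feats), y_id)    # standard ID objective
# loss_ood = ood_uniform_loss(head(feature_crossover(feats)))
# loss     = loss_id + lambda_ood * loss_ood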
KW - deep learning
KW - out-of-distribution detection
KW - virtual outlier synthesis
UR - http://www.scopus.com/inward/record.url?scp=85185002541&partnerID=8YFLogxK
DO - 10.1109/ICICN59530.2023.10393853
M3 - Conference contribution
AN - SCOPUS:85185002541
T3 - ICICN 2023 - 2023 IEEE 11th International Conference on Information, Communication and Networks
SP - 808
EP - 812
BT - ICICN 2023 - 2023 IEEE 11th International Conference on Information, Communication and Networks
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE 11th International Conference on Information, Communication and Networks, ICICN 2023
Y2 - 17 August 2023 through 20 August 2023
ER -