TY - GEN
T1 - Dual Supervised Contrastive Learning Based on Perturbation Uncertainty for Online Class Incremental Learning
AU - Su, Shibin
AU - Chen, Zhaojie
AU - Liang, Guoqiang
AU - Zhang, Shizhou
AU - Zhang, Yanning
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
PY - 2025
Y1 - 2025
N2 - To keep learning from a data stream with a changing distribution, continual learning has attracted much interest recently. Among its various settings, online class-incremental learning (OCIL) is more realistic and challenging, since each data sample can be used only once. Currently, by employing a buffer to store a few old samples, replay-based methods have achieved great success and dominate this area. Due to the single-pass property of OCIL, retrieving high-value samples from memory is very important. In most current works, the logits from the last fully connected (FC) layer are used to estimate the value of samples. However, the imbalance between the numbers of samples for old and new classes leads to a severe bias in the FC layer, which results in inaccurate estimation. Moreover, this bias also brings about abrupt feature changes. To address this problem, we propose a dual supervised contrastive learning method based on perturbation uncertainty. Specifically, we retrieve samples that have not been learned adequately based on perturbation uncertainty. Retraining such samples helps the model learn robust features. Then, we combine two types of supervised contrastive loss to replace the cross-entropy loss, which further enhances feature robustness and alleviates abrupt feature changes. Extensive experiments on three popular datasets demonstrate that our method surpasses several recently published works.
AB - To keep learning from a data stream with a changing distribution, continual learning has attracted much interest recently. Among its various settings, online class-incremental learning (OCIL) is more realistic and challenging, since each data sample can be used only once. Currently, by employing a buffer to store a few old samples, replay-based methods have achieved great success and dominate this area. Due to the single-pass property of OCIL, retrieving high-value samples from memory is very important. In most current works, the logits from the last fully connected (FC) layer are used to estimate the value of samples. However, the imbalance between the numbers of samples for old and new classes leads to a severe bias in the FC layer, which results in inaccurate estimation. Moreover, this bias also brings about abrupt feature changes. To address this problem, we propose a dual supervised contrastive learning method based on perturbation uncertainty. Specifically, we retrieve samples that have not been learned adequately based on perturbation uncertainty. Retraining such samples helps the model learn robust features. Then, we combine two types of supervised contrastive loss to replace the cross-entropy loss, which further enhances feature robustness and alleviates abrupt feature changes. Extensive experiments on three popular datasets demonstrate that our method surpasses several recently published works.
KW - Online class-incremental learning
KW - Perturbation uncertainty retrieval
KW - Supervised contrastive learning
UR - http://www.scopus.com/inward/record.url?scp=85213314799&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-78189-6_3
DO - 10.1007/978-3-031-78189-6_3
M3 - Conference contribution
AN - SCOPUS:85213314799
SN - 9783031781889
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 32
EP - 47
BT - Pattern Recognition - 27th International Conference, ICPR 2024, Proceedings
A2 - Antonacopoulos, Apostolos
A2 - Chaudhuri, Subhasis
A2 - Chellappa, Rama
A2 - Liu, Cheng-Lin
A2 - Bhattacharya, Saumik
A2 - Pal, Umapada
PB - Springer Science and Business Media Deutschland GmbH
T2 - 27th International Conference on Pattern Recognition, ICPR 2024
Y2 - 1 December 2024 through 5 December 2024
ER -