TY - JOUR
T1 - Bel
T2 - Batch Equalization Loss for scene graph generation
AU - Li, Huihui
AU - Liu, Baorong
AU - Wu, Dongqing
AU - Liu, Hang
AU - Guo, Lei
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
PY - 2023/11
Y1 - 2023/11
N2 - Since a scene graph can serve as the basis for many high-level vision semantic tasks, scene graph generation has attracted increasing attention from researchers. However, most works are limited by the long-tail distribution of the dataset and tend to predict frequent but uninformative predicates such as “on” and “of.” From a novel perspective, we found that during training, the model promotes the categories included in the batch while suppressing the categories not in the batch. The long-tailed distribution of the data leads to the continuous suppression of tail categories, which results in model bias. To solve this problem, we propose a simple and effective method named Batch Equalization Loss, which can be applied to most existing models and brings effective improvement with only a few changes. Notably, our method achieves a more significant improvement on small batches than on large batches. Extensive experiments on the VG150 dataset show that our work brings significant improvement over existing works. Code will be available at GitHub in the near future.
AB - Since a scene graph can serve as the basis for many high-level vision semantic tasks, scene graph generation has attracted increasing attention from researchers. However, most works are limited by the long-tail distribution of the dataset and tend to predict frequent but uninformative predicates such as “on” and “of.” From a novel perspective, we found that during training, the model promotes the categories included in the batch while suppressing the categories not in the batch. The long-tailed distribution of the data leads to the continuous suppression of tail categories, which results in model bias. To solve this problem, we propose a simple and effective method named Batch Equalization Loss, which can be applied to most existing models and brings effective improvement with only a few changes. Notably, our method achieves a more significant improvement on small batches than on large batches. Extensive experiments on the VG150 dataset show that our work brings significant improvement over existing works. Code will be available at GitHub in the near future.
KW - Equalization loss function
KW - Long-tailed data
KW - Scene graph generation
KW - Training batch
UR - http://www.scopus.com/inward/record.url?scp=85173722656&partnerID=8YFLogxK
U2 - 10.1007/s10044-023-01199-z
DO - 10.1007/s10044-023-01199-z
M3 - Article
AN - SCOPUS:85173722656
SN - 1433-7541
VL - 26
SP - 1821
EP - 1831
JO - Pattern Analysis and Applications
JF - Pattern Analysis and Applications
IS - 4
ER -