Modeling the Distributional Uncertainty for Salient Object Detection Models

Xinyu Tian, Jing Zhang, Mochu Xiang, Yuchao Dai

Research output: Contribution to journal › Conference article › peer-review

19 Citations (Scopus)

Abstract

Most existing salient object detection (SOD) models focus on improving overall performance without explicitly explaining the discrepancy between the training and testing distributions. In this paper, we investigate a particular type of epistemic uncertainty, namely distributional uncertainty, for salient object detection. Specifically, for the first time, we explore existing class-aware distribution-gap exploration techniques, i.e., long-tail learning, single-model uncertainty modeling and test-time strategies, and adapt them to model the distributional uncertainty for our class-agnostic task. We define test samples that are dissimilar to the training dataset as “out-of-distribution” (OOD) samples. Different from the conventional OOD definition, where OOD samples are those not belonging to the closed-world training categories, OOD samples for SOD are those that break the basic priors of saliency, i.e., the center prior, color contrast prior, compactness prior, etc., indicating that OOD is “continuous” rather than discrete for our task. We carry out extensive experiments to verify the effectiveness of existing distribution-gap modeling techniques for SOD, and conclude that both train-time single-model uncertainty estimation techniques and weight-regularization solutions that prevent model activations from drifting too much are promising directions for modeling distributional uncertainty for SOD.
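The paper surveys existing techniques rather than prescribing a single recipe; as a minimal, hypothetical sketch of the two directions the abstract highlights, the snippet below combines (a) a single-model uncertainty score derived from the predicted saliency logits and (b) an L2-SP-style weight-regularization term that discourages the model weights (and hence activations) from drifting far from their pretrained values. All names here (`predictive_entropy`, `weight_drift_penalty`, `pretrained_state`) are illustrative assumptions, not identifiers from the paper.

```python
# Minimal sketch (PyTorch), not the authors' implementation.
# (a) single-model uncertainty: per-pixel predictive entropy of the saliency map
# (b) weight regularization: L2 penalty on drift from the pretrained weights,
#     a proxy for keeping activations close to the training distribution.
import torch
import torch.nn.functional as F


def predictive_entropy(saliency_logits: torch.Tensor) -> torch.Tensor:
    """Per-pixel entropy of a binary saliency prediction of shape (B, 1, H, W)."""
    p = torch.sigmoid(saliency_logits).clamp(1e-6, 1 - 1e-6)
    return -(p * p.log() + (1 - p) * (1 - p).log())


def weight_drift_penalty(model: torch.nn.Module,
                         pretrained_state: dict,
                         weight: float = 1e-4) -> torch.Tensor:
    """L2-SP-style penalty keeping fine-tuned weights near their pretrained values."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name in pretrained_state:
            ref = pretrained_state[name].to(param.device)
            penalty = penalty + ((param - ref) ** 2).sum()
    return weight * penalty


# Hypothetical usage, assuming `model` maps an image batch to saliency logits:
# loss = F.binary_cross_entropy_with_logits(model(images), masks) \
#        + weight_drift_penalty(model, pretrained_state)
# uncertainty_map = predictive_entropy(model(test_images))
```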

Original language: English
Pages (from-to): 19660-19670
Number of pages: 11
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume: 2023-June
DOI
Publication status: Published - 2023
Event: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Vancouver, Canada
Duration: 18 Jun 2023 → 22 Jun 2023
