TY - JOUR
T1 - Towards Unifying Saliency Transformer for Video Saliency Prediction and Detection
AU - Xiong, Junwen
AU - Li, Chuanyue
AU - Liu, Tianyu
AU - Zhang, Peng
AU - Huo, Yue
AU - Huang, Wei
AU - Zha, Yufei
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Video saliency prediction and detection are thriving research domains that enable computers to simulate the distribution of visual attention akin to how humans perceive dynamic scenes. While many approaches have crafted task-specific training paradigms for either video saliency prediction or video salient object detection, little attention has been devoted to devising a generalized saliency modeling framework that seamlessly bridges these two distinct tasks. In this study, we introduce the Unified Saliency Transformer (UniST) framework, which comprehensively utilizes the essential attributes of video saliency prediction and video salient object detection. In addition to extracting representations of frame sequences, a saliency-aware transformer is designed to learn spatio-temporal representations at progressively increasing resolutions, while incorporating effective cross-scale saliency information to produce a robust representation. Furthermore, task-specific decoders are proposed to perform the final prediction for each task. To the best of our knowledge, this is the first work to explore the design of a unified framework for both saliency modeling tasks. Convincing experiments demonstrate that the proposed UniST achieves superior performance across eight challenging benchmarks for the two tasks, outperforming other state-of-the-art methods in most metrics. The project page is https://junwenxiong.github.io/UniST.
AB - Video saliency prediction and detection are thriving research domains that enable computers to simulate the distribution of visual attention akin to how humans perceive dynamic scenes. While many approaches have crafted task-specific training paradigms for either video saliency prediction or video salient object detection, little attention has been devoted to devising a generalized saliency modeling framework that seamlessly bridges these two distinct tasks. In this study, we introduce the Unified Saliency Transformer (UniST) framework, which comprehensively utilizes the essential attributes of video saliency prediction and video salient object detection. In addition to extracting representations of frame sequences, a saliency-aware transformer is designed to learn spatio-temporal representations at progressively increasing resolutions, while incorporating effective cross-scale saliency information to produce a robust representation. Furthermore, task-specific decoders are proposed to perform the final prediction for each task. To the best of our knowledge, this is the first work to explore the design of a unified framework for both saliency modeling tasks. Convincing experiments demonstrate that the proposed UniST achieves superior performance across eight challenging benchmarks for the two tasks, outperforming other state-of-the-art methods in most metrics. The project page is https://junwenxiong.github.io/UniST.
KW - Video saliency prediction
KW - unified saliency transformer
KW - video salient object detection
UR - http://www.scopus.com/inward/record.url?scp=85218731782&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2025.3544031
DO - 10.1109/TCSVT.2025.3544031
M3 - Article
AN - SCOPUS:85218731782
SN - 1051-8215
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
ER -