SAST: a suppressing ambiguity self-training framework for facial expression recognition

Zhe Guo, Bingxin Wei, Xuewen Liu, Zhibo Zhang, Shiya Liu, Yangyu Fan

Research output: Contribution to journal › Article › peer-review

Abstract

Facial expression recognition (FER) suffers from insufficient label information, as human expressions are complex and diverse, and many expressions are ambiguous. Using low-quality or insufficient labels aggravates the ambiguity of model predictions and reduces the accuracy of FER. How to improve the robustness of FER to ambiguous data with insufficient information remains challenging. To this end, we propose the Suppressing Ambiguity Self-Training (SAST) framework, which is the first attempt to address the problem of insufficient information in both label quality and label quantity simultaneously. Specifically, we design an Ambiguous Relative Label Usage (ARLU) strategy that mixes hard labels and soft labels to alleviate the information loss caused by hard labels. We also enhance the robustness of the model to ambiguous data by means of Self-Training Resampling (STR). We further use facial landmarks and a Patch Branch (PB) to enhance the ability to suppress ambiguity. Experiments on the RAF-DB, FERPlus, SFEW, and AffectNet datasets show that our SAST outperforms 6 semi-supervised methods with fewer annotations and achieves accuracy competitive with State-Of-The-Art (SOTA) FER methods. Our code is available at https://github.com/Liuxww/SAST.
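The abstract describes ARLU as mixing hard (one-hot) labels with soft label distributions to retain information about ambiguous expressions. The paper's exact mixing rule is not given here; the sketch below only illustrates the general idea with a convex combination, where the mixing weight `alpha` and the 7-class setup are illustrative assumptions, not details from the paper.

```python
import numpy as np

def mix_labels(hard_idx, soft_probs, alpha=0.6, num_classes=7):
    """Illustrative hard/soft label mixing (not the paper's exact ARLU rule).

    hard_idx   -- index of the annotated expression class
    soft_probs -- model-predicted probability distribution over classes
    alpha      -- weight on the hard label (assumed hyperparameter)
    """
    hard = np.zeros(num_classes)
    hard[hard_idx] = 1.0
    soft = np.asarray(soft_probs, dtype=float)
    soft = soft / soft.sum()  # normalize the predicted distribution
    # Convex combination keeps the result a valid probability distribution
    return alpha * hard + (1.0 - alpha) * soft

# Example: annotated class 2, with an ambiguous prediction spread
# mainly over classes 2 and 4.
mixed = mix_labels(2, [0.05, 0.05, 0.5, 0.05, 0.25, 0.05, 0.05])
```

Unlike a pure one-hot target, the mixed target preserves the model's uncertainty over secondary classes, which is the information-loss problem the abstract attributes to hard labels.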

Original language: English
Pages (from-to): 56059-56076
Number of pages: 18
Journal: Multimedia Tools and Applications
Volume: 83
Issue number: 18
DOI
Publication status: Published - May 2024
