TY - GEN
T1 - A Multi-Teacher Assisted Knowledge Distillation Approach for Enhanced Face Image Authentication
AU - Cheng, Tiancong
AU - Zhang, Ying
AU - Yin, Yifang
AU - Zimmermann, Roger
AU - Yu, Zhiwen
AU - Guo, Bin
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/6/12
Y1 - 2023/6/12
AB - Recent deep-learning-based face recognition systems have achieved significant success. However, most existing face recognition systems are vulnerable to spoofing attacks, where a copy of a face image is used to deceive the authentication. A number of solutions have been developed to overcome this problem by building a separate face anti-spoofing model, which, however, introduces additional storage and computation requirements. Since both the face recognition and face anti-spoofing tasks stem from the analysis of the same face image, this paper explores a unified approach that reduces the redundancy of the original dual-model design. To this end, we introduce a compressed multi-task model that performs both tasks simultaneously in a lightweight manner, which has the potential to benefit lightweight IoT applications. Concretely, we regard the original two single-task deep models as teacher networks and propose a novel multi-teacher-assisted knowledge distillation method to guide our lightweight multi-task model to achieve satisfactory performance on both tasks. Additionally, to bridge the large gap between the deep teachers and the lightweight student, a comprehensive feature alignment is further integrated by distilling multi-layer features. Extensive experiments are carried out on two benchmark datasets, where we achieve a task accuracy of 93% while reducing the model size by 97% and the inference time by 56% compared to the original dual-model setup.
KW - face anti-spoofing
KW - face authentication
KW - face recognition
KW - knowledge distillation
KW - model compression
UR - http://www.scopus.com/inward/record.url?scp=85163693193&partnerID=8YFLogxK
U2 - 10.1145/3591106.3592280
DO - 10.1145/3591106.3592280
M3 - Conference contribution
AN - SCOPUS:85163693193
T3 - ICMR 2023 - Proceedings of the 2023 ACM International Conference on Multimedia Retrieval
SP - 135
EP - 143
BT - ICMR 2023 - Proceedings of the 2023 ACM International Conference on Multimedia Retrieval
PB - Association for Computing Machinery, Inc
T2 - 2023 ACM International Conference on Multimedia Retrieval, ICMR 2023
Y2 - 12 June 2023 through 15 June 2023
ER -