TY - GEN
T1 - FaceLivePlus
T2 - 2023 ACM International Conference on Multimedia Retrieval, ICMR 2023
AU - Zhang, Ying
AU - Zheng, Lilei
AU - Thing, Vrizlynn L.L.
AU - Zimmermann, Roger
AU - Guo, Bin
AU - Yu, Zhiwen
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/6/12
Y1 - 2023/6/12
AB - Face verification is an increasingly popular way to verify a person's identity across a broad range of applications. However, such systems are vulnerable to face spoofing attacks, for example via a fraudulent copy of a photo, which makes face liveness detection a necessary additional safeguard. In most existing studies, face liveness detection is realized with a separate machine learning model in addition to the model for face verification. Such a two-model configuration can be difficult to deploy on platforms with limited computational power and storage (e.g., mobile phones, IoT devices), especially since each model may have millions of parameters. Inspired by the fact that humans can verify a person's identity and liveness from a face at a single glance, we develop a novel system, named FaceLivePlus, that learns a single, universal face descriptor for the two tasks (face verification and liveness detection), so that the computational workload and storage space can be halved. To achieve this, we formulate the underlying relationship between the two tasks and seamlessly embed it in a distance-ranking deep model. The model works directly on features rather than classification labels, which helps the system generalize well to unseen data. Extensive experiments show that our average half total error rate (HTER) improves on the state of the art by at least 15% and 8% on two benchmark datasets. We anticipate this approach could become a new direction for face authentication.
KW - Biometrics
KW - Face authentication
KW - Face liveness detection
KW - Multimedia Forensics
KW - Multimedia Security
KW - Multimedia System
UR - http://www.scopus.com/inward/record.url?scp=85163628115&partnerID=8YFLogxK
U2 - 10.1145/3591106.3592289
DO - 10.1145/3591106.3592289
M3 - Conference contribution
AN - SCOPUS:85163628115
T3 - ICMR 2023 - Proceedings of the 2023 ACM International Conference on Multimedia Retrieval
SP - 144
EP - 152
BT - ICMR 2023 - Proceedings of the 2023 ACM International Conference on Multimedia Retrieval
PB - Association for Computing Machinery, Inc
Y2 - 12 June 2023 through 15 June 2023
ER -
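
Note: the abstract reports results in terms of the half total error rate (HTER). As a point of reference, the sketch below gives the standard definition of HTER, namely the mean of the false acceptance rate (FAR) and the false rejection rate (FRR) at a chosen decision threshold \tau; this is an assumption about the conventional metric, not a statement of the paper's exact evaluation protocol.

% Standard HTER definition (assumed conventional form, not taken from the paper)
\[
\mathrm{HTER}(\tau) = \frac{\mathrm{FAR}(\tau) + \mathrm{FRR}(\tau)}{2}
\]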