TY - JOUR
T1 - CompactNet: learning a compact space for face presentation attack detection
AU - Li, Lei
AU - Xia, Zhaoqiang
AU - Jiang, Xiaoyue
AU - Roli, Fabio
AU - Feng, Xiaoyi
N1 - Publisher Copyright:
© 2020 Elsevier B.V.
PY - 2020/10/7
Y1 - 2020/10/7
N2 - Face presentation attacks have become a clear and present threat to face recognition systems, and many presentation attack detection (PAD) countermeasures have been proposed to mitigate them. Some of these countermeasures use features extracted directly from well-known color spaces (e.g., RGB, HSV and YCbCr) to distinguish fake face images from genuine (“live”) ones. However, existing color spaces were originally designed for displaying the visual content of images or videos with high fidelity, and are not well suited for directly discriminating between live and fake face images. Therefore, in this paper, we propose a deep-learning system, called CompactNet, for learning a compact space tailored for face PAD. More specifically, the proposed CompactNet does not extract features directly in an existing color space, but feeds the color face image into a layer-by-layer progressive space generator. Then, under the optimization of the “points-to-center” triplet loss function, the generator learns a compact space with small intra-class distances, large inter-class distances and a safe interval between the classes. Finally, the features of the image in the compact space are extracted by a pre-trained feature extractor and used for classification. Reported experiments on three publicly available face PAD databases, namely Replay-Attack, OULU-NPU and HKBU-MARs V1, show that CompactNet separates the two classes of fake and genuine faces very well and significantly outperforms state-of-the-art PAD methods.
AB - Face presentation attacks have become a clear and present threat to face recognition systems, and many presentation attack detection (PAD) countermeasures have been proposed to mitigate them. Some of these countermeasures use features extracted directly from well-known color spaces (e.g., RGB, HSV and YCbCr) to distinguish fake face images from genuine (“live”) ones. However, existing color spaces were originally designed for displaying the visual content of images or videos with high fidelity, and are not well suited for directly discriminating between live and fake face images. Therefore, in this paper, we propose a deep-learning system, called CompactNet, for learning a compact space tailored for face PAD. More specifically, the proposed CompactNet does not extract features directly in an existing color space, but feeds the color face image into a layer-by-layer progressive space generator. Then, under the optimization of the “points-to-center” triplet loss function, the generator learns a compact space with small intra-class distances, large inter-class distances and a safe interval between the classes. Finally, the features of the image in the compact space are extracted by a pre-trained feature extractor and used for classification. Reported experiments on three publicly available face PAD databases, namely Replay-Attack, OULU-NPU and HKBU-MARs V1, show that CompactNet separates the two classes of fake and genuine faces very well and significantly outperforms state-of-the-art PAD methods.
KW - Biometrics
KW - Compact space
KW - Deep learning
KW - Face PAD
UR - http://www.scopus.com/inward/record.url?scp=85086395438&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2020.05.017
DO - 10.1016/j.neucom.2020.05.017
M3 - Article
AN - SCOPUS:85086395438
SN - 0925-2312
VL - 409
SP - 191
EP - 207
JO - Neurocomputing
JF - Neurocomputing
ER -