TY - GEN
T1 - EarPass
T2 - 44th IEEE International Conference on Distributed Computing Systems, ICDCS 2024
AU - Xie, Yanze
AU - Gao, Mengzhen
AU - Liu, Xiaoning
AU - Huang, Shuo
AU - Cui, Helei
AU - Yu, Zhiwen
AU - Guo, Bin
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
AB - With the growing reliance on digital systems in today's mobile Internet era, robust authentication methods are crucial for safeguarding personal data and controlling access to resources. Conventional methods, such as knowledge-based and biometric-based authentication, are widely used but suffer from usage limitations and potential security concerns: they can fail when users wear protective suits or masks, and they can be imitated by motivated attackers. In this paper, we propose EarPass, an earphone-based authentication system that leverages users' unique head motion patterns in response to a very short music segment. We employ a Convolutional Neural Network (CNN)-based feature extractor to capture distinct head motions and map them into a well-separated latent space, enabling high-dimensional feature extraction. Through extensive experiments, we demonstrate the consistency, uniqueness, and robustness of head motion patterns and achieve a 98.2% F1-score, indicating superior performance compared to conventional authentication methods. Additionally, EarPass is user-friendly, secure, and adaptable to various environments, including noisy and movement-oriented scenarios. By integrating the authentication system into Android devices, we demonstrate its real-world applicability, low energy consumption, and minimal latency. The source code of EarPass will be open-sourced to foster further research and collaboration within the community.
KW - earphones
KW - head motion
KW - ubiquitous computing
KW - user authentication
UR - http://www.scopus.com/inward/record.url?scp=85203168942&partnerID=8YFLogxK
U2 - 10.1109/ICDCS60910.2024.00121
DO - 10.1109/ICDCS60910.2024.00121
M3 - Conference contribution
AN - SCOPUS:85203168942
T3 - Proceedings - International Conference on Distributed Computing Systems
SP - 1283
EP - 1293
BT - Proceedings - 2024 IEEE 44th International Conference on Distributed Computing Systems, ICDCS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 23 July 2024 through 26 July 2024
ER -