TY - GEN
T1 - Beyond the Model
T2 - 2023 Workshop on Secure and Trustworthy Deep Learning Systems, SecTL 2023 at AsiaCCS 2023
AU - Sang, Ye
AU - Huang, Yujin
AU - Huang, Shuo
AU - Cui, Helei
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/7/10
Y1 - 2023/7/10
N2 - The increasing popularity of deep learning (DL) models and the advantages of on-device computing, such as low latency and bandwidth savings on smartphones, have led to the emergence of intelligent mobile applications, also known as DL apps, in recent years. However, this technological development has also given rise to several security concerns, including adversarial examples, model stealing, and data poisoning. Existing work on attacks and countermeasures for on-device DL models has primarily focused on the models themselves, while scant attention has been paid to the impact of data processing disturbance on model inference. This knowledge gap highlights the need for additional research to fully understand and address security issues related to data processing for on-device models. In this paper, we introduce a data processing-based attack against real-world DL apps. In particular, our attack can influence the performance and latency of the model without affecting the operation of a DL app. To demonstrate the effectiveness of our attack, we carry out an empirical study on 517 real-world DL apps collected from Google Play. Among 320 apps utilizing ML Kit, we find that 81.56% of them can be successfully attacked. The results emphasize the importance of DL app developers being aware of, and taking action to secure, on-device models from the perspective of data processing.
AB - The increasing popularity of deep learning (DL) models and the advantages of on-device computing, such as low latency and bandwidth savings on smartphones, have led to the emergence of intelligent mobile applications, also known as DL apps, in recent years. However, this technological development has also given rise to several security concerns, including adversarial examples, model stealing, and data poisoning. Existing work on attacks and countermeasures for on-device DL models has primarily focused on the models themselves, while scant attention has been paid to the impact of data processing disturbance on model inference. This knowledge gap highlights the need for additional research to fully understand and address security issues related to data processing for on-device models. In this paper, we introduce a data processing-based attack against real-world DL apps. In particular, our attack can influence the performance and latency of the model without affecting the operation of a DL app. To demonstrate the effectiveness of our attack, we carry out an empirical study on 517 real-world DL apps collected from Google Play. Among 320 apps utilizing ML Kit, we find that 81.56% of them can be successfully attacked. The results emphasize the importance of DL app developers being aware of, and taking action to secure, on-device models from the perspective of data processing.
UR - http://www.scopus.com/inward/record.url?scp=85168541812&partnerID=8YFLogxK
U2 - 10.1145/3591197.3591308
DO - 10.1145/3591197.3591308
M3 - Conference contribution
AN - SCOPUS:85168541812
T3 - ACM International Conference Proceeding Series
BT - Proceedings of the Inaugural AsiaCCS 2023 Workshop on Secure and Trustworthy Deep Learning Systems, SecTL 2023
PB - Association for Computing Machinery
Y2 - 10 July 2023
ER -