TY - JOUR
T1 - Research on a Co-evolution Method for Multi-terminal Video Stream Intelligent Recognition Models
AU - Wang, Le Hao
AU - Liu, Si Cong
AU - Yu, Zhi Wen
AU - Yu, Hao Yin
AU - Guo, Bin
N1 - Publisher Copyright:
© 2024 Science Press. All rights reserved.
PY - 2024/5
Y1 - 2024/5
N2 - Developing Artificial Intelligence of Things (AIoT) technology and building a ubiquitous computing digital infrastructure are important directions. To overcome the privacy issues of cloud computing and meet the needs of low-latency applications, deploying deep models on ubiquitous intelligent IoT terminals to provide intelligent applications and services has attracted increasing attention. However, on-terminal deployment of deep models faces many challenges. Constrained by the limited resources of terminal hardware platforms, researchers have turned to model compression techniques and hardware accelerators to support lightweight, high-quality deployment of deep models. Nevertheless, video applications based on deep models inevitably face data drift in real mobile scenes, and the problem is especially pronounced on mobile devices because of more severe distribution fluctuations and sparser network structures. Under data drift, the accuracy of a deep model degrades significantly, making it difficult to meet performance requirements. Edge-assisted online model evolution is an effective way to address data drift and can realize an intelligent computing system that evolves and grows. Previous model evolution systems focus only on improving the accuracy of the terminal model. In a multi-terminal system, however, the global model is also affected by data drift because scenario data from different terminals is more complex and varied, reducing the accuracy gain of the system. To provide stable and reliable knowledge transfer to the terminal models, federated learning is needed to evolve the global model. Yet traditional federated learning faces the combined challenges of terminal model heterogeneity and data distribution heterogeneity in multi-model evolution systems. Moreover, the speed of online evolution affects the proportion of time that terminal models provide high-accuracy service, degrading their life-cycle performance. To jointly improve the accuracy and speed of model evolution for multiple terminal models, this paper proposes a method and system for the co-evolution of multi-terminal video stream intelligent recognition models based on the concept of software-hardware integration. On the one hand, we develop a novel multi-terminal mutual learning and co-evolution method, which overcomes the challenge of model and data heterogeneity with the help of new terminal scene data and realizes high-accuracy-gain co-evolution of the multi-terminal models and the global model. On the other hand, combining the characteristics of mutual learning algorithms, we propose a training acceleration method based on in-memory computing that uses adaptive data compression and model training optimization to improve hardware performance, accelerating the evolution of multiple terminal models while preserving the evolution accuracy gain. Finally, experiments on continuous evolution tasks of lightweight models in different real mobile scenarios, compared against six benchmark methods, show that NestEvo reduces evolution delay by 51.98% and improves the average inference accuracy of the terminal lightweight models by 42.6%.
AB - Developing Artificial Intelligence of Things (AIoT) technology and building a ubiquitous computing digital infrastructure are important directions. To overcome the privacy issues of cloud computing and meet the needs of low-latency applications, deploying deep models on ubiquitous intelligent IoT terminals to provide intelligent applications and services has attracted increasing attention. However, on-terminal deployment of deep models faces many challenges. Constrained by the limited resources of terminal hardware platforms, researchers have turned to model compression techniques and hardware accelerators to support lightweight, high-quality deployment of deep models. Nevertheless, video applications based on deep models inevitably face data drift in real mobile scenes, and the problem is especially pronounced on mobile devices because of more severe distribution fluctuations and sparser network structures. Under data drift, the accuracy of a deep model degrades significantly, making it difficult to meet performance requirements. Edge-assisted online model evolution is an effective way to address data drift and can realize an intelligent computing system that evolves and grows. Previous model evolution systems focus only on improving the accuracy of the terminal model. In a multi-terminal system, however, the global model is also affected by data drift because scenario data from different terminals is more complex and varied, reducing the accuracy gain of the system. To provide stable and reliable knowledge transfer to the terminal models, federated learning is needed to evolve the global model. Yet traditional federated learning faces the combined challenges of terminal model heterogeneity and data distribution heterogeneity in multi-model evolution systems. Moreover, the speed of online evolution affects the proportion of time that terminal models provide high-accuracy service, degrading their life-cycle performance. To jointly improve the accuracy and speed of model evolution for multiple terminal models, this paper proposes a method and system for the co-evolution of multi-terminal video stream intelligent recognition models based on the concept of software-hardware integration. On the one hand, we develop a novel multi-terminal mutual learning and co-evolution method, which overcomes the challenge of model and data heterogeneity with the help of new terminal scene data and realizes high-accuracy-gain co-evolution of the multi-terminal models and the global model. On the other hand, combining the characteristics of mutual learning algorithms, we propose a training acceleration method based on in-memory computing that uses adaptive data compression and model training optimization to improve hardware performance, accelerating the evolution of multiple terminal models while preserving the evolution accuracy gain. Finally, experiments on continuous evolution tasks of lightweight models in different real mobile scenarios, compared against six benchmark methods, show that NestEvo reduces evolution delay by 51.98% and improves the average inference accuracy of the terminal lightweight models by 42.6%.
KW - Artificial Intelligence of Things
KW - data drift
KW - in-memory computing
KW - model evolution
KW - mutual learning
KW - training acceleration scheme
UR - http://www.scopus.com/inward/record.url?scp=85196727969&partnerID=8YFLogxK
U2 - 10.11897/SP.J.1016.2024.00947
DO - 10.11897/SP.J.1016.2024.00947
M3 - Article
AN - SCOPUS:85196727969
SN - 0254-4164
VL - 47
SP - 947
EP - 970
JO - Jisuanji Xuebao/Chinese Journal of Computers
JF - Jisuanji Xuebao/Chinese Journal of Computers
IS - 5
ER -