TY - GEN
T1 - Poster
T2 - 43rd IEEE International Conference on Distributed Computing Systems, ICDCS 2023
AU - Zhang, Bo
AU - Huang, Shuo
AU - Cui, Helei
AU - Liu, Xiaoning
AU - Yu, Zhiwen
AU - Guo, Bin
AU - Xing, Tao
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - The rapid evolution of public knowledge is a hallmark of the present era, rendering previously collected data susceptible to obsolescence. Continuously generated new knowledge can further degrade the performance of models trained on previous data, a phenomenon called temporal misalignment. A vanilla mitigation approach is to periodically update the model in a centralized learning scheme. However, in a decentralized learning framework like Federated Learning (FL), such a patch requires clients to upload their data, which contradicts FL's intention to protect clients' privacy. Furthermore, given the stationary defenses in FL, new knowledge could be misjudged and rejected as a malicious attack, hindering further updates of the model. Yet dynamically adapting defenses requires meticulous fine-tuning and harms scalability. Thus, in this poster, we raise this practical concern and discuss it in the context of FL. We then build a prototype of a GPT2-based FL framework and conduct experiments to demonstrate our perspective. Performance on new knowledge drops by 33.47% compared with the previous data, which confirms that FL with defense strategies can misjudge new knowledge.
AB - The rapid evolution of public knowledge is a hallmark of the present era, rendering previously collected data susceptible to obsolescence. Continuously generated new knowledge can further degrade the performance of models trained on previous data, a phenomenon called temporal misalignment. A vanilla mitigation approach is to periodically update the model in a centralized learning scheme. However, in a decentralized learning framework like Federated Learning (FL), such a patch requires clients to upload their data, which contradicts FL's intention to protect clients' privacy. Furthermore, given the stationary defenses in FL, new knowledge could be misjudged and rejected as a malicious attack, hindering further updates of the model. Yet dynamically adapting defenses requires meticulous fine-tuning and harms scalability. Thus, in this poster, we raise this practical concern and discuss it in the context of FL. We then build a prototype of a GPT2-based FL framework and conduct experiments to demonstrate our perspective. Performance on new knowledge drops by 33.47% compared with the previous data, which confirms that FL with defense strategies can misjudge new knowledge.
KW - Federated Learning
KW - Secure Aggregation
KW - Temporal Misalignment
UR - http://www.scopus.com/inward/record.url?scp=85175061340&partnerID=8YFLogxK
U2 - 10.1109/ICDCS57875.2023.00131
DO - 10.1109/ICDCS57875.2023.00131
M3 - Conference contribution
AN - SCOPUS:85175061340
T3 - Proceedings - International Conference on Distributed Computing Systems
SP - 1063
EP - 1064
BT - Proceedings - 2023 IEEE 43rd International Conference on Distributed Computing Systems, ICDCS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 18 July 2023 through 21 July 2023
ER -