TY - GEN
T1 - Think Twice Before Selection: Federated Evidential Active Learning for Medical Image Analysis with Domain Shifts
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
AU - Chen, Jiayi
AU - Ma, Benteng
AU - Cui, Hengfei
AU - Xia, Yong
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Federated learning facilitates the collaborative learning of a global model across multiple distributed medical institutions without centralizing data. Nevertheless, the expensive cost of annotation on local clients remains an obstacle to effectively utilizing local data. To mitigate this issue, federated active learning methods suggest leveraging local and global model predictions to select a relatively small amount of informative local data for annotation. However, existing methods mainly focus on all local data sampled from the same domain, making them unreliable in realistic medical scenarios with domain shifts among different clients. In this paper, we make the first attempt to assess the informativeness of local data derived from diverse domains and propose a novel methodology termed Federated Evidential Active Learning (FEAL) to calibrate the data evaluation under domain shift. Specifically, we introduce a Dirichlet prior distribution in both local and global models to treat the prediction as a distribution over the probability simplex and capture both aleatoric and epistemic uncertainties by using the Dirichlet-based evidential model. Then we employ the epistemic uncertainty to calibrate the aleatoric uncertainty. Afterward, we design a diversity relaxation strategy to reduce data redundancy and maintain data diversity. Extensive experiments and analysis on five real multi-center medical image datasets demonstrate the superiority of FEAL over the state-of-the-art active learning methods in federated scenarios with domain shifts. The code will be available at https://github.com/JiayiChen815/FEAL.
AB - Federated learning facilitates the collaborative learning of a global model across multiple distributed medical institutions without centralizing data. Nevertheless, the expensive cost of annotation on local clients remains an obstacle to effectively utilizing local data. To mitigate this issue, federated active learning methods suggest leveraging local and global model predictions to select a relatively small amount of informative local data for annotation. However, existing methods mainly focus on all local data sampled from the same domain, making them unreliable in realistic medical scenarios with domain shifts among different clients. In this paper, we make the first attempt to assess the informativeness of local data derived from diverse domains and propose a novel methodology termed Federated Evidential Active Learning (FEAL) to calibrate the data evaluation under domain shift. Specifically, we introduce a Dirichlet prior distribution in both local and global models to treat the prediction as a distribution over the probability simplex and capture both aleatoric and epistemic uncertainties by using the Dirichlet-based evidential model. Then we employ the epistemic uncertainty to calibrate the aleatoric uncertainty. Afterward, we design a diversity relaxation strategy to reduce data redundancy and maintain data diversity. Extensive experiments and analysis on five real multi-center medical image datasets demonstrate the superiority of FEAL over the state-of-the-art active learning methods in federated scenarios with domain shifts. The code will be available at https://github.com/JiayiChen815/FEAL.
KW - Active learning
KW - Federated learning
KW - Medical image analysis
KW - Uncertainty estimation
UR - http://www.scopus.com/inward/record.url?scp=85204923512&partnerID=8YFLogxK
U2 - 10.1109/CVPR52733.2024.01087
DO - 10.1109/CVPR52733.2024.01087
M3 - Conference contribution
AN - SCOPUS:85204923512
T3 - Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
SP - 11439
EP - 11449
BT - Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024
PB - IEEE Computer Society
Y2 - 16 June 2024 through 22 June 2024
ER -