[1] ZHANG Mingquan, JIA Yuanyuan, ZHANG Ronghua. Hybrid knowledge distillation-assisted heterogeneous federated class incremental learning for digital twins[J]. CAAI Transactions on Intelligent Systems, 2025, 20(4): 905-915. [doi:10.11992/tis.202406027]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume: 20
Issue: 2025(4)
Pages: 905-915
Column: Academic Papers—Machine Learning
Publication date: 2025-08-05
- Title:
-
Hybrid knowledge distillation-assisted heterogeneous federated class incremental learning for digital twins
- Author(s):
-
ZHANG Mingquan1,2; JIA Yuanyuan1; ZHANG Ronghua1,3
-
1. Department of Computer, North China Electric Power University, Baoding 071003, China;
2. Hebei Key Laboratory of Knowledge Computing for Energy & Power, North China Electric Power University, Baoding 071003, China;
3. Engineering Research Center of Intelligent Computing for Complex Energy Systems, Ministry of Education, North China Electric Power University, Baoding 071003, China
-
- Keywords:
-
digital twin; federated class incremental learning; hybrid knowledge distillation; data heterogeneity; image classification; catastrophic forgetting; CT images; federated learning
- CLC:
-
TP399
- DOI:
-
10.11992/tis.202406027
- Abstract:
-
In the context of digital twins, federated learning faces non-independent and identically distributed (non-IID) data and dynamically changing classes, which can be viewed as data heterogeneity on both the spatial and temporal scales. To address this problem, this paper constructs an overall framework for federated class incremental learning for digital twins and proposes a federated class incremental learning method called hybrid knowledge distillation-assisted heterogeneous federated class incremental learning (FedKA). Specifically, unlike traditional federated learning approaches, FedKA employs a hybrid knowledge distillation method during the local update phase. This method integrates an adaptive semantic distillation loss with an adaptive attention distillation loss, allowing FedKA to distill the soft-label semantic knowledge of the output layer and the high-dimensional feature knowledge of the intermediate layers of the old global model. Consequently, the client model can effectively reduce forgetting of old data while fitting new data, improving the performance of the federated class incremental model. Under the same data heterogeneity, the proposed FedKA method improves accuracy on the CIFAR-100 dataset by 1.85% to 2.56% over state-of-the-art models. Furthermore, FedKA achieves optimal or near-optimal performance on the medical CT image datasets OrganAMNIST, OrganCMNIST, and OrganSMNIST.
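The hybrid distillation described in the abstract combines a soft-label (semantic) loss on the output layer with an attention loss on intermediate feature maps. The sketch below is a minimal NumPy illustration of that general pattern, not the paper's implementation: the function names are hypothetical, and the fixed weights `alpha`/`beta` stand in for the paper's adaptive weighting, whose details are not given in the abstract.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax over the last axis
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def semantic_distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between softened teacher (old global model) and
    # student (client model) output distributions, scaled by T^2
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean() * T * T)

def attention_map(feat):
    # Collapse a (C, H, W) feature map into an L2-normalized spatial
    # attention map by summing squared activations over channels
    a = (feat ** 2).sum(axis=0)
    return a / (np.linalg.norm(a) + 1e-12)

def attention_distillation_loss(student_feat, teacher_feat):
    # Squared error between student and teacher attention maps
    diff = attention_map(student_feat) - attention_map(teacher_feat)
    return float((diff ** 2).sum())

def hybrid_kd_loss(s_logits, t_logits, s_feat, t_feat, alpha=0.5, beta=0.5):
    # Fixed alpha/beta here; the paper uses adaptive weights instead
    return (alpha * semantic_distillation_loss(s_logits, t_logits)
            + beta * attention_distillation_loss(s_feat, t_feat))
```

When student and teacher agree exactly, both terms vanish, so the loss only penalizes the client model for drifting away from the old global model's outputs and feature attention while it fits new classes.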