[1] ZHANG Mingquan, JIA Yuanyuan, ZHANG Ronghua. Hybrid knowledge distillation-assisted heterogeneous federated class incremental learning for digital twins[J]. CAAI Transactions on Intelligent Systems, 2025, 20(4): 905-915. [doi:10.11992/tis.202406027]

Hybrid knowledge distillation-assisted heterogeneous federated class incremental learning for digital twins

References:
[1] 张红艳, 张玉, 曹灿明. 一种解决数据异构问题的联邦学习方法[J]. 计算机应用研究, 2024, 41(3): 713-720.
ZHANG Hongyan, ZHANG Yu, CAO Canming. Effective method to solve problem of data heterogeneity in federated learning[J]. Application research of computers, 2024, 41(3): 713-720.
[2] 徐奕成, 戴超凡, 马武彬, 等. 基于粒子群优化的面向数据异构的联邦学习方法[J]. 计算机科学, 2024, 51(6): 391-398.
XU Yicheng, DAI Chaofan, MA Wubin, et al. Particle swarm optimization-based federated learning method for heterogeneous data[J]. Computer science, 2024, 51(6): 391-398.
[3] 王健宗, 张旭龙, 姜桂林, 等. 基于分层联邦框架的音频模型生成技术研究[J]. 智能系统学报, 2024, 19(5): 1331-1339.
WANG Jianzong, ZHANG Xulong, JIANG Guilin, et al. Research on audio model generation technology based on a hierarchical federated framework[J]. CAAI transactions on intelligent systems, 2024, 19(5): 1331-1339.
[4] 窦勇敢, 袁晓彤. 基于隐式随机梯度下降优化的联邦学习[J]. 智能系统学报, 2022, 17(3): 488-495.
DOU Yonggan, YUAN Xiaotong. Federated learning with implicit stochastic gradient descent optimization[J]. CAAI transactions on intelligent systems, 2022, 17(3): 488-495.
[5] ZHOU Zhihua. Open-environment machine learning[J]. National science review, 2022, 9(8): nwac123.
[6] LI Tian, SAHU A K, ZAHEER M, et al. Federated optimization in heterogeneous networks[EB/OL]. (2018-12-14)[2024-06-18]. https://arxiv.org/abs/1812.06127.
[7] KARIMIREDDY S P, KALE S, MOHRI M, et al. SCAFFOLD: stochastic controlled averaging for federated learning[C]//Proceedings of the 37th International Conference on Machine Learning. New York: PMLR, 2020: 5132-5143.
[8] MCMAHAN H B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[EB/OL]. (2016-05-01)[2024-06-18]. https://arxiv.org/abs/1602.05629v4.
[9] DONG Jiahua, WANG Lixu, FANG Zhen, et al. Federated class-incremental learning[EB/OL]. (2022-06-30)[2024-06-18]. https://github.com/conditionWang/FCIL/blob/main/README.md.
[10] DONG Jiahua, LI Hongliu, CONG Yang, et al. No one left behind: real-world federated class-incremental learning[J]. IEEE transactions on pattern analysis and machine intelligence, 2024, 46(4): 2054-2070.
[11] WANG Xiucheng, CHENG Nan, MA Longfei, et al. Digital twin-assisted knowledge distillation framework for heterogeneous federated learning[J]. China communications, 2023, 20(2): 61-78.
[12] PANG Junjie, HUANG Yan, XIE Zhenzhen, et al. Collaborative city digital twin for the COVID-19 pandemic: a federated learning solution[J]. Tsinghua science and technology, 2021, 26(5): 759-771.
[13] ZHAO Yunming, LI Li, LIU Ying, et al. Communication-efficient federated learning for digital twin systems of industrial Internet of Things[J]. IFAC-PapersOnLine, 2022, 55(2): 433-438.
[14] LEE G, JEONG M, SHIN Y, et al. Preservation of the global knowledge by not-true distillation in federated learning[EB/OL]. (2021-06-06)[2024-06-18]. https://arxiv.org/abs/2106.03097v5.
[15] CHEN Hongyou, CHAO Weilun. FedBE: making Bayesian model ensemble applicable to federated learning[EB/OL]. (2020-09-04)[2024-06-18]. https://arxiv.org/abs/2009.01974.
[16] ZHANG Lin, SHEN Li, DING Liang, et al. Fine-tuning global model via data-free knowledge distillation for non-IID federated learning[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022: 10164-10173.
[17] KIRKPATRICK J, PASCANU R, RABINOWITZ N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the national academy of sciences, 2017, 114(13): 3521-3526.
[18] ZENG Guanxiong, CHEN Yang, CUI Bo, et al. Continual learning of context-dependent processing in neural networks[J]. Nature machine intelligence, 2019, 1: 364-372.
[19] FARAJTABAR M, AZIZAN N, MOTT A, et al. Orthogonal gradient descent for continual learning[C]//Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics. New York: PMLR, 2020: 3762-3773.
[20] LI Zhizhong, HOIEM D. Learning without forgetting[J]. IEEE transactions on pattern analysis and machine intelligence, 2018, 40(12): 2935-2947.
[21] DHAR P, SINGH R V, PENG Kuanchuan, et al. Learning without memorizing[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 5133-5141.
[22] DOUILLARD A, CORD M, OLLION C, et al. PODNet: pooled outputs distillation for small-tasks incremental learning[C]//Computer Vision-ECCV 2020. Cham: Springer International Publishing, 2020: 86-102.
[23] REBUFFI S A, KOLESNIKOV A, SPERL G, et al. iCaRL: incremental classifier and representation learning[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 5533-5542.
[24] WU Yue, CHEN Yinpeng, WANG Lijuan, et al. Large scale incremental learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 374-382.
[25] MALLYA A, LAZEBNIK S. PackNet: adding multiple tasks to a single network by iterative pruning[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7765-7773.
[26] RUSU A A, RABINOWITZ N C, DESJARDINS G, et al. Progressive neural networks[EB/OL]. (2016-06-15)[2024-06-18]. https://arxiv.org/abs/1606.04671v4.
[27] 程虎威. 面向边缘计算资源受限场景的增量深度学习[D]. 北京: 北京交通大学, 2022.
CHENG Huwei. Incremental deep learning for resource-constrained edge computing scenarios[D]. Beijing: Beijing Jiaotong University, 2022.
[28] SHOHAM N, AVIDOR T, KEREN A, et al. Overcoming forgetting in federated learning on non-IID data[C]//2019 Workshop on Federated Learning for Data Privacy and Confidentiality. Vancouver: NeurIPS, 2019.
[29] MORI J, TERANISHI I, FURUKAWA R. Continual horizontal federated learning for heterogeneous data[C]//2022 International Joint Conference on Neural Networks. Padua: IEEE, 2022: 1-8.
[30] USMANOVA A, PORTET F, LALANDA P, et al. A distillation-based approach integrating continual learning and federated learning for pervasive services[EB/OL]. (2021-09-09)[2024-06-18]. https://arxiv.org/abs/2109.04197v1.
[31] YOON J, JEONG W, LEE G, et al. Federated continual learning with weighted inter-client transfer[C]//International Conference on Machine Learning. New York: PMLR, 2021: 12073-12086.
[32] SELVARAJU R R, COGSWELL M, DAS A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization[C]//2017 IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 618-626.
[33] ZAGORUYKO S, KOMODAKIS N. Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer[EB/OL]. (2016-12-12)[2024-06-18]. https://arxiv.org/abs/1612.03928.
[34] KRIZHEVSKY A, HINTON G. Learning multiple layers of features from tiny images[EB/OL]. (2012-05-18)[2024-06-18]. https://www.researchgate.net/publication/265748773_Learning_Multiple_Layers_of_Features_from_Tiny_Images.
[35] YANG Jiancheng, SHI Rui, WEI Donglai, et al. MedMNIST v2: a large-scale lightweight benchmark for 2D and 3D biomedical image classification[J]. Scientific data, 2023, 10(1): 41.