[1]窦勇敢,袁晓彤.基于隐式随机梯度下降优化的联邦学习[J].智能系统学报,2022,17(3):488-495.[doi:10.11992/tis.202106029]
 DOU Yonggan,YUAN Xiaotong.Federated learning with implicit stochastic gradient descent optimization[J].CAAI Transactions on Intelligent Systems,2022,17(3):488-495.[doi:10.11992/tis.202106029]

基于隐式随机梯度下降优化的联邦学习
Federated Learning with Implicit Stochastic Gradient Descent Optimization

参考文献/References:
[1] MCMAHAN B, MOORE E, RAMAGE D, et al. Communication-efficient learning of deep networks from decentralized data[C]//Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. Fort Lauderdale, USA, 2017: 1273–1282.
[2] WANG Hongyi, YUROCHKIN M, SUN Yuekai, et al. Federated learning with matched averaging[EB/OL]. (2020-02-25)[2021-03-09]. https://arxiv.org/abs/2002.06440.
[3] KOPPARAPU K, LIN E, ZHAO J. FedCD: improving performance in non-IID federated learning[EB/OL]. (2020-07-27)[2021-03-09]. https://arxiv.org/abs/2006.09637.
[4] YU Hao, YANG Sen, ZHU Shenghuo. Parallel restarted SGD with faster convergence and less communication: Demystifying why model averaging works for deep learning[C]//Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence. Palo Alto, USA, 2019: 5693-5700.
[5] WANG Shiqiang, TUOR T, SALONIDIS T, et al. Adaptive federated learning in resource constrained edge computing systems[J]. IEEE journal on selected areas in communications, 2019, 37(6): 1205–1221.
[6] YU Hao, JIN Rong, YANG Sen. On the linear speedup analysis of communication efficient momentum SGD for distributed non-convex optimization[C]//Proceedings of the 36th International Conference on Machine Learning. Long Beach, USA, 2019: 7184–7193.
[7] JEONG E, OH S, KIM H, et al. Communication-efficient on-device machine learning: federated distillation and augmentation under non-IID private data[EB/OL]. (2018-11-28)[2021-03-09]. https://arxiv.org/abs/1811.11479.
[8] HUANG Li, YIN Yifeng, FU Zeng, et al. LoAdaBoost: loss-based AdaBoost federated machine learning with reduced computational complexity on IID and non-IID intensive care data[J]. PLoS one, 2020, 15(4): e0230706.
[9] REDDI S, CHARLES Z, ZAHEER M, et al. Adaptive federated optimization[EB/OL]. (2021-09-08)[2021-10-09]. https://arxiv.org/abs/2003.00295.
[10] YANG Kai, FAN Tao, CHEN Tianjian, et al. A quasi-Newton method based vertical federated learning framework for logistic regression[EB/OL]. (2019-12-04)[2021-09-08]. https://arxiv.org/abs/1912.00513.
[11] DHAKAL S, PRAKASH S, YONA Y, et al. Coded federated learning[C]//2019 IEEE Globecom Workshops (GC Wkshps). Waikoloa, USA, 2019: 1–6.
[12] WANG Cong, YANG Yuanyuan, ZHOU Pengzhan. Towards efficient scheduling of federated mobile devices under computational and statistical heterogeneity[J]. IEEE transactions on parallel and distributed systems, 2021, 32(2): 394–410.
[13] MALINOVSKIY G, KOVALEV D, GASANOV E, et al. From local SGD to local fixed-point methods for federated learning[C]//Proceedings of the 37th International Conference on Machine Learning. New York, USA, 2020: 6692–6701.
[14] HANZELY F, RICHTÁRIK P. Federated learning of a mixture of global and local models[EB/OL]. (2020-02-10)[2021-03-09]. https://arxiv.org/abs/2002.05516.
[15] ROTHCHILD D, PANDA A, ULLAH E, et al. FetchSGD: Communication-efficient federated learning with sketching[C]//Proceedings of the 37th International Conference on Machine Learning. New York, USA, 2020: 8253–8265.
[16] WANG Jialei, WANG Weiran, SREBRO N. Memory and communication efficient distributed stochastic optimization with minibatch-prox[C]//Proceedings of the 2017 Conference on Learning Theory. New York, USA, 2017: 1882–1919.
[17] LI Tian, HU Shengyuan, BEIRAMI A, et al. Federated multi-task learning for competing constraints[EB/OL]. [2021-03-09]. https://openreview.net/forum?id=1ZN5y4yx6T1.
[18] LI Tian, SAHU A, ZAHEER M, et al. Federated optimization in heterogeneous networks[J]. Proceedings of machine learning and systems, 2020, 2: 429–450.
[19] ZHOU Pan, YUAN Xiaotong, XU Huan, et al. Efficient meta learning via minibatch proximal update[EB/OL]. (2019-12-08)[2021-03-09]. https://openreview.net/forum?id=B1gSHVrx8S.
[20] PHONG L T, AONO Y, HAYASHI T, et al. Privacy-preserving deep learning via additively homomorphic encryption[J]. IEEE transactions on information forensics and security, 2018, 13(5): 1333–1345.
[21] GO A, BHAYANI R, HUANG Lei. Twitter sentiment classification using distant supervision[R]. Stanford: Stanford University, CS224N project report, 2009.
[22] LECUN Y, BOTTOU L, BENGIO Y, et al. Gradient-based learning applied to document recognition[J]. Proceedings of the IEEE, 1998, 86(11): 2278–2324.
[23] COHEN G, AFSHAR S, TAPSON J, et al. EMNIST: extending MNIST to handwritten letters[C]//2017 International Joint Conference on Neural Networks (IJCNN). Anchorage, USA, 2017: 2921–2926.
[24] TUNG K K. Topics in mathematical modeling[M]. Princeton: Princeton University Press, 2007.
[25] BALLES L, HENNIG P. Dissecting Adam: the sign, magnitude and variance of stochastic gradients[C]//Proceedings of the 35th International Conference on Machine Learning. Stockholm, Sweden, 2018: 404–413.
相似文献/Similar Articles:
[1]王健宗,肖京,朱星华,等.联邦推荐系统的协同过滤冷启动解决方法[J].智能系统学报,2021,16(1):178.[doi:10.11992/tis.202009032]
WANG Jianzong, XIAO Jing, ZHU Xinghua, et al. Cold starts in collaborative filtering for federated recommender systems[J]. CAAI Transactions on Intelligent Systems, 2021, 16(1): 178. [doi:10.11992/tis.202009032]

备注/Memo

Received: 2021-06-18.
Foundation projects: National Natural Science Foundation of China (61876090, 61936005); Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2018AAA0100400).
Author biographies: DOU Yonggan, master's student; his main research interests are federated learning and semantic segmentation. YUAN Xiaotong, professor and doctoral supervisor, member of the CCF Technical Committee on Computer Vision, member of the CAA Technical Committee on Pattern Recognition and Machine Intelligence, and IEEE member; his main research interests are machine learning and computer vision. He was selected for the Jiangsu Province Double Innovation Talent Program and has published more than 80 academic papers.
Corresponding author: YUAN Xiaotong. E-mail: xtyuan1980@gmail.com.
