[1]张文旭,马磊,王晓东.基于事件驱动的多智能体强化学习研究[J].智能系统学报,2017,12(1):82-87.[doi:10.11992/tis.201604008]
 ZHANG Wenxu,MA Lei,WANG Xiaodong.Reinforcement learning for event-triggered multi-agent systems[J].CAAI Transactions on Intelligent Systems,2017,12(1):82-87.[doi:10.11992/tis.201604008]

Reinforcement Learning for Event-Triggered Multi-Agent Systems

References:
[1] ZHU Wei, JIANG Zhongping, FENG Gang. Event-based consensus of multi-agent systems with general linear models[J]. Automatica, 2014, 50(2): 552-558.
[2] FAN Yuan, FENG Gang, WANG Yong, et al. Distributed event-triggered control of multi-agent systems with combinational measurements[J]. Automatica, 2013, 49(2): 671-675.
[3] WANG Xiaofeng, LEMMON M D. Event-triggering in distributed networked control systems[J]. IEEE transactions on automatic control, 2011, 56(3): 586-601.
[4] TABUADA P. Event-triggered real-time scheduling of stabilizing control tasks[J]. IEEE transactions on automatic control, 2007, 52(9): 1680-1685.
[5] ZOU Lei, WANG Zidong, GAO Huijun, et al. Event-triggered state estimation for complex networks with mixed time delays via sampled data information: the continuous-time case[J]. IEEE transactions on cybernetics, 2015, 45(12): 2804-2815.
[6] SAHOO A, XU Hao, JAGANNATHAN S. Adaptive neural network-based event-triggered control of single-input single-output nonlinear discrete-time systems[J]. IEEE transactions on neural networks and learning systems, 2016, 27(1): 151-164.
[7] HU Wenfeng, LIU Lu, FENG Gang. Consensus of linear multi-agent systems by distributed event-triggered strategy[J]. IEEE transactions on cybernetics, 2016, 46(1): 148-157.
[8] ZHONG Xiangnan, NI Zhen, HE Haibo, et al. Event-triggered reinforcement learning approach for unknown nonlinear continuous-time system[C]//Proceedings of 2014 International Joint Conference on Neural Networks. Beijing, China, 2014: 3677-3684.
[9] XU Hao, JAGANNATHAN S. Near optimal event-triggered control of nonlinear continuous-time systems using input and output data[C]//Proceedings of the 11th World Congress on Intelligent Control and Automation. Shenyang, China, 2014: 1799-1804.
[10] BERNSTEIN D S, GIVAN R, IMMERMAN N, et al. The complexity of decentralized control of Markov decision processes[J]. Mathematics of operations research, 2002, 27(4): 819-840.
[11] WATKINS C J C H, DAYAN P. Q-learning[J]. Machine learning, 1992, 8(3/4): 279-292.
[12] SZEPESVRI C, LITTMAN M L. A unified analysis of value-function-based reinforcement-learning algorithms[J]. Neural computation, 1999, 11(8): 2017-2060.
Similar Articles:
[1]谭树彬,刘建昌.Multi-Agent的连续轧制过程控制系统研究[J].智能系统学报,2008,3(2):150.
 TAN Shu-bin,LIU Jian-chang.Research on multi-agent based control system for continuous rolling process[J].CAAI Transactions on Intelligent Systems,2008,3(2):150.
[2]雷明,周超,周绍磊,等.考虑时变时滞的多移动智能体分布式编队控制[J].智能系统学报,2012,7(6):536.
 LEI Ming,ZHOU Chao,ZHOU Shaolei,et al.Decentralized formation control of multiple mobile agents considering time-varying delay[J].CAAI Transactions on Intelligent Systems,2012,7(6):536.
[3]郭文强,高晓光,侯勇严,等.采用MSBN多智能体协同推理的智能农业车辆环境识别[J].智能系统学报,2013,8(5):453.[doi:10.3969/j.issn.1673-4785.201210057]
 GUO Wenqiang,GAO Xiaoguang,HOU Yongyan,et al.Environment recognition of intelligent agricultural vehicles based on MSBN and multi-agent coordinative inference[J].CAAI Transactions on Intelligent Systems,2013,8(5):453.[doi:10.3969/j.issn.1673-4785.201210057]
[4]曹鹏飞,郝矿荣,丁永生.面向多机器人动态任务分配的事件驱动免疫网络算法[J].智能系统学报,2018,13(6):952.[doi:10.11992/tis.201707022]
 CAO Pengfei,HAO Kuangrong,DING Yongsheng.Event-driven immune network algorithm for multi-robot dynamic task allocation[J].CAAI Transactions on Intelligent Systems,2018,13(6):952.[doi:10.11992/tis.201707022]
[5]殷昌盛,杨若鹏,朱巍,等.多智能体分层强化学习综述[J].智能系统学报,2020,15(4):646.[doi:10.11992/tis.201909027]
 YIN Changsheng,YANG Ruopeng,ZHU Wei,et al.A survey on multi-agent hierarchical reinforcement learning[J].CAAI Transactions on Intelligent Systems,2020,15(4):646.[doi:10.11992/tis.201909027]
[6]赵玉新,杜登辉,成小会,等.基于强化学习的海洋移动观测网络观测路径规划方法[J].智能系统学报,2022,17(1):192.[doi:10.11992/tis.202106004]
 ZHAO Yuxin,DU Denghui,CHENG Xiaohui,et al.Path planning for mobile ocean observation network based on reinforcement learning[J].CAAI Transactions on Intelligent Systems,2022,17(1):192.[doi:10.11992/tis.202106004]
[7]徐鹏,谢广明,文家燕,等.事件驱动的强化学习多智能体编队控制[J].智能系统学报,2019,14(1):93.[doi:10.11992/tis.201807010]
 XU Peng,XIE Guangming,WEN Jiayan,et al.Event-triggered reinforcement learning formation control for multi-agent systems[J].CAAI Transactions on Intelligent Systems,2019,14(1):93.[doi:10.11992/tis.201807010]

Memo

Received: 2016-04-05; revised date not recorded.
Foundation item: Young Scientists Fund of the National Natural Science Foundation of China (61304166).
About the authors: ZHANG Wenxu, male, born in 1985, PhD candidate. His main research interests are multi-agent systems and machine learning. He has published 4 papers, all indexed by EI. MA Lei, male, born in 1972, professor, PhD. His main research interests are control theory and its applications in robotics, renewable energy, and rail transit systems. He has led 14 domestic and international research projects and published more than 40 papers, 37 of which are indexed by EI. WANG Xiaodong, male, born in 1992, master's student. His main research interest is machine learning. He holds 3 national invention patents and has published 4 papers.
Corresponding author: ZHANG Wenxu. Email: wenxu_zhang@163.com.

Copyright © Editorial Office of CAAI Transactions on Intelligent Systems
Address: Building 145-1, Nantong Street, Nangang District, Harbin 150001, Heilongjiang Province. Tel: 0451-82534001, 82518134