[1]ZHANG Wenxu,MA Lei,HE Huilin,et al.Air-ground heterogeneous coordination for multi-agent coverage based on reinforced learning[J].CAAI Transactions on Intelligent Systems,2018,13(2):202-207.[doi:10.11992/tis.201609017]

Air-ground heterogeneous coordination for multi-agent coverage based on reinforced learning

References:
[1] KANTAROS Y, ZAVLANOS M M. Distributed communication-aware coverage control by mobile sensor networks[J]. Automatica, 2016, 63: 209-220.
[2] CAI Zixing, CUI Yi’an. Survey of multi-robot coverage[J]. Control and decision, 2008, 23(5): 481-486, 491. (in Chinese)
[3] MAHBOUBI H, MOEZZI K, AGHDAM A G, et al. Distributed deployment algorithms for improved coverage in a network of wireless mobile sensors[J]. IEEE transactions on industrial informatics, 2014, 10(1): 163-174.
[4] TAO Dan, WU T Y. A survey on barrier coverage problem in directional sensor networks[J]. IEEE sensors journal, 2015, 15(2): 876-885.
[5] TIAN Yuping, ZHANG Ya. High-order consensus of heterogeneous multi-agent systems with unknown communication delays[J]. Automatica, 2012, 48(6): 1205-1212.
[6] SONG Cheng, LIU Lu, FENG Gang, et al. Coverage control for heterogeneous mobile sensor networks on a circle[J]. Automatica, 2016, 63: 349-358.
[7] KANTAROS Y, THANOU M, TZES A. Distributed coverage control for concave areas by a heterogeneous robot-swarm with visibility sensing constraints[J]. Automatica, 2015, 53: 195-207.
[8] WANG Xinbing, HAN Sihui, WU Yibo, et al. Coverage and energy consumption control in mobile heterogeneous wireless sensor networks[J]. IEEE transactions on automatic control, 2013, 58(4): 975-988.
[9] SHARIFI F, CHAMSEDDINE A, MAHBOUBI H, et al. A distributed deployment strategy for a network of cooperative autonomous vehicles[J]. IEEE transactions on control systems technology, 2015, 23(2): 737-745.
[10] CHEN Jie, ZHANG Xing, XIN Bin, et al. Coordination between unmanned aerial and ground vehicles: a taxonomy and optimization perspective[J]. IEEE transactions on cybernetics, 2016, 46(4): 959-972.
[11] ZHOU Yi, CHENG Nan, LU Ning, et al. Multi-UAV-aided networks: aerial-ground cooperative vehicular networking architecture[J]. IEEE vehicular technology magazine, 2015, 10(4): 36-44.
[12] PAPACHRISTOS C, TZES A. The power-tethered UAV-UGV team: a collaborative strategy for navigation in partially-mapped environments[C]//Proceedings of 22nd Mediterranean Conference of Control and Automation. Palermo, Italy, 2014: 1153-1158.
[13] GROCHOLSKY B, KELLER J, KUMAR V, et al. Cooperative air and ground surveillance[J]. IEEE robotics and automation magazine, 2006, 13(3): 16-25.
[14] KHALEGHI A M, XU Dong, WANG Zhenrui, et al. A DDDAMS-based planning and control framework for surveillance and crowd control via UAVs and UGVs[J]. Expert systems with applications, 2013, 40(18): 7168-7183.
[15] MA Lei, ZHANG Wenxu, DAI Chaohua. A review of developments in reinforcement learning for multi-robot systems[J]. Journal of Southwest Jiaotong University, 2014, 49(6): 1032-1044. (in Chinese)
[16] PUTERMAN M L. Markov decision processes: discrete stochastic dynamic programming[M]. New York: John Wiley and Sons, 1994.
[17] WATKINS C J C H, DAYAN P. Q-learning[J]. Machine learning, 1992, 8(3/4): 279-292.
[18] WU Feng, ZILBERSTEIN S, CHEN Xiaoping. Online planning for multi-agent systems with bounded communication[J]. Artificial intelligence, 2011, 175(2): 487-511.