[1]XU Peng,XIE Guangming,WEN Jiayan,et al.Event-triggered reinforcement learning formation control for multi-agent[J].CAAI Transactions on Intelligent Systems,2019,14(1):93-98.[doi:10.11992/tis.201807010]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 14
Issue: 2019(1)
Pages: 93-98
Column: Academic Papers - Machine Learning
Publication date: 2019-01-05
- Title: Event-triggered reinforcement learning formation control for multi-agent
- Author(s): XU Peng1, XIE Guangming1,2,3, WEN Jiayan1,2, GAO Yuan1
  1. School of Electric and Information Engineering, Guangxi University of Science and Technology, Liuzhou 545006, China;
  2. College of Engineering, Peking University, Beijing 100871, China;
  3. Institute of Ocean Research, Peking University, Beijing 100871, China
- Keywords: reinforcement learning; multi-agent; event-triggered; formation control; Markov decision processes; swarm intelligence; action-decisions; particle swarm optimization
- CLC: TP391.8
- DOI: 10.11992/tis.201807010
- Abstract:
Classical reinforcement learning for multi-agent formation consumes substantial communication and computing resources. This paper introduces an event-triggered mechanism so that the agents' decisions need not be carried out periodically; instead, each agent's action is updated only when an event-triggered condition is satisfied. Both the total accumulated reward and the variance of the current rewards are considered when designing the event-triggered condition, and a joint optimization strategy is obtained by exchanging information among the agents. Numerical simulation results demonstrate that the proposed multi-agent formation control algorithm can effectively reduce the frequency of the agents' action decisions and the consumption of resources while maintaining system performance.
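The abstract describes triggering a new action decision from reward statistics (accumulated reward and reward variance) rather than at every step. The paper's exact condition is not given here, so the following is only an illustrative sketch: a trigger that fires when the latest reward drifts from the recent mean or the variance over a sliding window grows too large; the window size and both thresholds are assumed, not taken from the paper.

```python
def should_update(reward_history, window=5, drift_threshold=0.1, var_threshold=0.05):
    """Illustrative event-trigger: decide whether an agent should recompute
    its action, based on recent reward statistics.

    Fires (returns True) when the newest reward drifts from the mean of the
    last `window` rewards by more than `drift_threshold`, or when the
    variance over that window exceeds `var_threshold`. All parameters are
    assumptions for illustration, not values from the paper.
    """
    if len(reward_history) < window:
        return True  # too little data: fall back to deciding every step
    recent = reward_history[-window:]
    mean = sum(recent) / window
    variance = sum((r - mean) ** 2 for r in recent) / window
    drift = abs(recent[-1] - mean)
    return drift > drift_threshold or variance > var_threshold
```

In a control loop each agent would reuse its previous action whenever `should_update` returns False, so communication and computation occur only on triggering events.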