[1]ZHANG Xiaochuan,WANG Wanwan,PENG Lirong.A multi-chess collaborative game method for military chess game machine[J].CAAI Transactions on Intelligent Systems,2020,15(2):399-404.[doi:10.11992/tis.201812012]

A multi-chess collaborative game method for military chess game machine

References:
[1] TAO Jiuyang, WU Lin, HU Xiaofeng. Principle analysis on AlphaGo and perspective in military application of artificial intelligence[J]. Journal of command and control, 2016, 2(2): 114-120.
[2] CHEN J X. The evolution of computing: AlphaGo[J]. Computing in science & engineering, 2016, 18(4): 4-7.
[3] LI Xiang, JIANG Xiaohong, CHEN Yingzhi, et al. Game in multiplayer no-limit Texas Hold’em based on hands prediction[J]. Chinese journal of computers, 2018, 41(1): 47-64.
[4] WANG Yajie, QIU Hongkun, WU Yanyan, et al. Research and development of computer games[J]. CAAI transactions on intelligent systems, 2016, 11(6): 788-798.
[5] HONG Yiguang, CHEN Guanrong, BUSHNELL L. Distributed observers design for leader-following control of multi-agent networks[J]. Automatica, 2008, 44(3): 846-850.
[6] TENG Wenjuan. Research on Texas poker game based on counterfactual regret minimization algorithm[D]. Harbin: Harbin Institute of Technology, 2015.
[7] VAN HASSELT H, GUEZ A, SILVER D. Deep reinforcement learning with double Q-learning[C]//Proceedings of the 30th AAAI Conference on Artificial Intelligence. Phoenix, USA, 2016.
[8] SHI Xiaoru, HOU Yuanbin, ZHANG Tao. The decision-making system of robots based on an incomplete information game[J]. CAAI transactions on intelligent systems, 2011, 6(2): 147-151.
[9] INCZE M L, SIDELEAU S R, GAGNER C, et al. Communication and collaboration of heterogeneous unmanned systems using the joint architecture for unmanned systems (JAUS) standards[C]//OCEANS 2015 - Genova. Genova, Italy: IEEE, 2015.
[10] QIAO Lin. Study of Q-learning algorithm in multi-agent system[D]. Nanjing: Nanjing University of Posts and Telecommunications, 2012.
[11] LIANG Guojun, XIE Chuiyi, HU Lingli, et al. An implementation of UCT algorithm for NoGo game[J]. Journal of Shaoguan university, 2015, 36(8): 17-21.
[12] MENG Kun, WANG Jun, YAN Tong. Design and implementation of a military chess game algorithm based on knowledge experience[J]. Intelligent computer and application, 2017, 7(2): 66-69.
[13] SUN Yinglong. The study on imperfect information game and design and implementation of military chess system[D]. Shenyang: Northeastern University, 2013.
[14] WANG Xuehou. Research on computational mode of swarm intelligent optimization and applications[D]. Beijing: North China Electric Power University, 2011.
[15] ZHANG Xiaochuan, SANG Ruiting, ZHOU Zehong, et al. A short text classification method based on two-channel convolutional neural network[J]. Journal of Chongqing University of Technology (Natural Science), 2019, 33(1): 45-52.

Copyright © CAAI Transactions on Intelligent Systems