[1]赵玉新,杜登辉,成小会,等.基于强化学习的海洋移动观测网络观测路径规划方法[J].智能系统学报,2022,17(1):192-200.[doi:10.11992/tis.202106004]
 ZHAO Yuxin,DU Denghui,CHENG Xiaohui,et al.Path planning for mobile ocean observation network based on reinforcement learning[J].CAAI Transactions on Intelligent Systems,2022,17(1):192-200.[doi:10.11992/tis.202106004]

基于强化学习的海洋移动观测网络观测路径规划方法 / Path planning for mobile ocean observation network based on reinforcement learning

参考文献/References:
[1] 王建友.习近平建设海洋强国战略探析[J]. 辽宁师范大学学报(社会科学版), 2019, 42(5): 103–112.
WANG Jianyou. Analysis on Xi Jinping's strategy of building a maritime power[J]. Journal of Liaoning Normal University (social science edition), 2019, 42(5): 103–112.
[2] 尹路,李延斌,马金钢.海洋观测技术现状综述[J].舰船电子工程, 2013, 33(11): 4–7, 13.
YIN Lu, LI Yanbin, MA Jingang. A review of ocean observation technology [J]. Ship electronic engineering, 2013, 33(11): 4–7, 13.
[3] 张燕武.自适应海洋观测[J].地球科学进展, 2013, 28(5): 537–541.
ZHANG Yanwu. Adaptive ocean observation[J]. Advances in earth science, 2013, 28(5): 537–541.
[4] 李颖虹, 王凡, 任小波. 海洋观测能力建设的现状、趋势与对策思考[J]. 地球科学进展, 2010, 25(7): 715–722.
LI Yinghong, WANG Fan, REN Xiaobo. Current situation, trend and countermeasures of ocean observation capacity construction[J]. Advances in earth science, 2010, 25(7): 715–722.
[5] 王毅然, 经小川, 贾福凯, 等. 基于多智能体协同强化学习的多目标追踪方法[J]. 计算机工程, 2020, 46(11): 90–96.
WANG Yiran, JING Xiaochuan, JIA Fukai, et al. Multi-target tracking method based on multi-agent cooperative reinforcement learning[J]. Computer engineering, 2020, 46(11): 90–96.
[6] 韩向敏,鲍泓,梁军, 等.一种基于深度强化学习的自适应巡航控制算法[J]. 计算机工程, 2018, 44(7):32–35.
HAN Xiangmin, BAO Hong, LIANG Jun, et al. An adaptive cruise control algorithm based on deep reinforcement learning [J]. Computer engineering, 2018, 44(7):32–35.
[7] YAN C, XIANG X, WANG C. Towards real-time path planning through deep reinforcement learning for a UAV in dynamic environments[J]. Journal of intelligent & robotic systems, 2020, 98(2): 297–309.
[8] WEN S, ZHAO Y, YUAN X, et al. Path planning for active SLAM based on deep reinforcement learning under unknown environments[J]. Intelligent service robotics, 2020, 13(2): 263–272.
[9] YAO Q, ZHENG Z, QI L, et al. Path planning method with improved artificial potential field—A reinforcement learning perspective[J]. IEEE access, 2020, 8: 135513–135523.
[10] LI B, WU Y. Path planning for UAV ground target tracking via deep reinforcement learning[J]. IEEE access, 2020, 8: 29064–29074.
[11] JIANG J, ZENG X, GUZZETTI D, et al. Path planning for asteroid hopping rovers with pre-trained deep reinforcement learning architectures[J]. Acta astronautica, 2020, 171: 265–279.
[12] WANG B, LIU Z, LI Q, et al. Mobile robot path planning in dynamic environments through globally guided reinforcement learning[J]. IEEE robotics and automation letters, 2020, 5(4): 6932–6939.
[13] WEI Y, ZHENG R. Informative path planning for mobile sensing with reinforcement learning[C]//IEEE INFOCOM 2020-IEEE Conference on Computer Communications. Beijing, China, 2020: 864–873.
[14] JOSEF S, DEGANI A. Deep reinforcement learning for safe local planning of a ground vehicle in unknown rough terrain[J]. IEEE robotics and automation letters, 2020, 5(4): 6748–6755.
[15] 杜威. 多智能体强化学习研究[D]. 徐州:中国矿业大学, 2020.
DU Wei. Research on multi-agent reinforcement learning [D]. Xuzhou: China University of Mining and Technology, 2020.
[16] 卜祥津. 基于深度强化学习的未知环境下机器人路径规划的研究[D]. 哈尔滨:哈尔滨工业大学, 2018.
BU Xiangjin. Research on robot path planning in unknown environment based on deep reinforcement learning [D]. Harbin: Harbin Institute of Technology, 2018.
[17] 向卉. 基于深度强化学习的室内目标路径规划研究[D]. 桂林:桂林电子科技大学, 2019.
XIANG Hui. Research on indoor target path planning based on deep reinforcement learning[D]. Guilin: Guilin University of Electronic Technology, 2019.
[18] 姚君延. 基于深度增强学习的路径规划算法研究[D]. 成都:电子科技大学, 2018.
YAO Junyan. Research of path planning algorithms based on deep reinforcement learning [D]. Chengdu: University of Electronic Science and Technology of China, 2018.
[19] 李艳庆. 基于遗传算法和深度强化学习的多无人机协同区域监视的航路规划[D]. 西安:西安电子科技大学, 2018.
LI Yanqing. Route planning for multi-UAV cooperative area surveillance based on genetic algorithm and deep reinforcement learning [D]. Xi’an: Xidian University, 2018.
[20] 邓悟. 基于深度强化学习的智能体避障与路径规划研究与应用[D]. 成都:电子科技大学, 2019.
DENG Wu. Research and application of obstacle avoidance and path planning for agents based on deep reinforcement learning [D]. Chengdu: University of Electronic Science and Technology of China, 2019.
[21] 王毅然, 经小川, 田涛,等. 基于强化学习的多Agent路径规划方法研究[J]. 计算机应用与软件, 2019, 36(8):7.
WANG Yiran, JING Xiaochuan, TIAN Tao, et al. Research on multi-agent path planning method based on reinforcement learning [J]. Computer applications and software, 2019, 36(8):7.
[22] 林桢祥. 基于深度增强学习的图像语句描述生成研究[D].长沙:国防科技大学, 2017.
LIN Zhenxiang. Research on image description generation based on deep reinforcement learning[D]. Changsha: National University of Defense Technology, 2017.
[23] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing Atari with deep reinforcement learning[EB/OL]. 2013. https://arxiv.org/abs/1312.5602.
[24] 郭宪. 基于DQN的机械臂控制策略的研究[D]. 北京:北京交通大学, 2018.
GUO Xian. Research on control strategy of manipulator based on DQN [D]. Beijing: Beijing Jiaotong University, 2018.
[25] 李季. 基于深度强化学习的移动边缘计算中的计算卸载与资源分配算法研究与实现[D].北京:北京邮电大学, 2019.
LI Ji. Research and implementation of computation offloading and resource allocation algorithms in mobile edge computing based on deep reinforcement learning[D]. Beijing: Beijing University of Posts and Telecommunications, 2019.
相似文献/Similar Articles:
[1]周文吉,俞扬.分层强化学习综述[J].智能系统学报,2017,12(5):590.[doi:10.11992/tis.201706031]
ZHOU Wenji,YU Yang.Summarize of hierarchical reinforcement learning[J].CAAI Transactions on Intelligent Systems,2017,12(5):590.[doi:10.11992/tis.201706031]
[2]王作为,徐征,张汝波,等.记忆神经网络在机器人导航领域的应用与研究进展[J].智能系统学报,2020,15(5):835.[doi:10.11992/tis.202002020]
WANG Zuowei,XU Zheng,ZHANG Rubo,et al.Research progress and application of memory neural network in robot navigation[J].CAAI Transactions on Intelligent Systems,2020,15(5):835.[doi:10.11992/tis.202002020]
[3]杨瑞,严江鹏,李秀.强化学习稀疏奖励算法研究——理论与实验[J].智能系统学报,2020,15(5):888.[doi:10.11992/tis.202003031]
YANG Rui,YAN Jiangpeng,LI Xiu.Survey of sparse reward algorithms in reinforcement learning — theory and experiment[J].CAAI Transactions on Intelligent Systems,2020,15(5):888.[doi:10.11992/tis.202003031]
[4]欧阳勇平,魏长赟,蔡帛良.动态环境下分布式异构多机器人避障方法研究[J].智能系统学报,2022,17(4):752.[doi:10.11992/tis.202106044]
OUYANG Yongping,WEI Changyun,CAI Boliang.Collision avoidance approach for distributed heterogeneous multirobot systems in dynamic environments[J].CAAI Transactions on Intelligent Systems,2022,17(4):752.[doi:10.11992/tis.202106044]
[5]王竣禾,姜勇.基于深度强化学习的动态装配算法[J].智能系统学报,2023,18(1):2.[doi:10.11992/tis.202201006]
WANG Junhe,JIANG Yong.Dynamic assembly algorithm based on deep reinforcement learning[J].CAAI Transactions on Intelligent Systems,2023,18(1):2.[doi:10.11992/tis.202201006]
[6]陶鑫钰,王艳,纪志成.基于深度强化学习的节能工艺路线发现方法[J].智能系统学报,2023,18(1):23.[doi:10.11992/tis.202112030]
TAO Xinyu,WANG Yan,JI Zhicheng.Energy-saving process route discovery method based on deep reinforcement learning[J].CAAI Transactions on Intelligent Systems,2023,18(1):23.[doi:10.11992/tis.202112030]
[7]张钰欣,赵恩娇,赵玉新.规则耦合下的多异构子网络MADDPG博弈对抗算法[J].智能系统学报,2024,19(1):190.[doi:10.11992/tis.202303037]
ZHANG Yuxin,ZHAO Enjiao,ZHAO Yuxin.MADDPG game confrontation algorithm for multiple heterogeneous subnetworks under rule coupling[J].CAAI Transactions on Intelligent Systems,2024,19(1):190.[doi:10.11992/tis.202303037]

备注/Memo

Received: 2021-06-02.
Foundation item: National Natural Science Foundation of China (41676088); Fundamental Research Funds for the Central Universities (3072021CFJ0401).
About the authors: ZHAO Yuxin, professor and doctoral supervisor, is secretary-general of the MIIT expert group on high-tech ship communication, navigation and intelligent systems, a council member of the China Institute of Navigation, a standing council member of the Decision Science Branch of the Operations Research Society of China, an IET Fellow, and an IEEE senior member. His main research interests are underwater navigation technology and its applications, operational oceanography, and intelligent navigation technology. He has led national defense 973 projects, national major science and technology projects, and National Natural Science Foundation of China projects, published more than 100 academic papers, and authored 4 monographs. DU Denghui, master's student; his main research interests are reinforcement learning algorithms and ocean observation networks. LIU Yanlong, doctoral student; his main research interests are intelligent algorithms, operational oceanography, and ocean observation networks.
Corresponding author: LIU Yanlong. E-mail: yanlong_liu@hrbeu.edu.cn.
