[1]陈世同,鲁子瑜.洋流干扰下低速欠驱动AUV的三维路径规划[J].智能系统学报,2025,20(2):425-434.[doi:10.11992/tis.202311004]
 CHEN Shitong,LU Ziyu.3D path planning for low-speed underactuated AUV under ocean current disturbance[J].CAAI Transactions on Intelligent Systems,2025,20(2):425-434.[doi:10.11992/tis.202311004]

洋流干扰下低速欠驱动AUV的三维路径规划 / 3D path planning for a low-speed underactuated AUV under ocean current disturbance

参考文献/References:
[1] LI Xiaohong, YU Shuanghe. Three-dimensional path planning for AUVs in ocean currents environment based on an improved compression factor particle swarm optimization algorithm[J]. Ocean engineering, 2023, 280: 114610.
[2] 李娟, 张韵, 陈涛. 改进RRT算法在未知三维环境下AUV目标搜索中的应用[J]. 智能系统学报, 2022, 17(2): 368-375.
LI Juan, ZHANG Yun, CHEN Tao. Application of the improved RRT algorithm to AUV target search in an unknown 3D environment[J]. CAAI transactions on intelligent systems, 2022, 17(2): 368-375.
[3] BI Anyuan, ZHAO Fengye, ZHANG Xiantao, et al. Combined depth control strategy for low-speed and long-range autonomous underwater vehicles[J]. Journal of marine science and engineering, 2020, 8(3): 181.
[4] 刘甲, 周伟江, 马伟建. 低速AUV航渡过程中减少洋流影响的方法[J]. 舰船科学技术, 2020, 42(3): 88-92,97.
LIU Jia, ZHOU Weijiang, MA Weijian. Adaptive decision methods to reduce the effect of ocean currents on low-speed AUV sailing[J]. Ship science and technology, 2020, 42(3): 88-92,97.
[5] LIU Rundong, CHEN Zonggan, WANG Zijia, et al. Intelligent path planning for AUVs in dynamic environments: an EDA-based learning fixed height histogram approach[J]. IEEE access, 2019, 7: 185433-185446.
[6] 应泽光, 何琪. 基于改进A*算法的无人艇复杂水域路径规划[J]. 机电技术, 2022, 45(5): 33-35.
YING Zeguang, HE Qi. Complex water path planning for unmanned boats based on improved A* algorithm[J]. Mechanical & electrical technology, 2022, 45(5): 33-35.
[7] WANG Yanlong, LIANG Xu, LI Baoan, et al. Research and implementation of global path planning for unmanned surface vehicle based on electronic chart[C]//International Conference on Mechatronics and Intelligent Robotics. Kunming: Springer, 2017: 534-539.
[8] LI Ye, JIANG Yanqing, MA Shan, et al. Inverse speed analysis and low speed control of underwater vehicle[J]. Journal of central south university, 2014, 21(7): 2652-2659.
[9] DOHAN K. Ocean surface currents from satellite data[J]. Journal of geophysical research: oceans, 2017, 122(4): 2647-2651.
[10] HU Siyuan, XIAO Shuai, YANG Jiachen, et al. AUV path planning considering ocean current disturbance based on cloud desktop technology[J]. Sensors, 2023, 23(17): 7510.
[11] 李慧, 赵琳, 毛英. 海况干扰下潜艇六自由度运动分析[J]. 哈尔滨工程大学学报, 2017, 38(1): 94-100.
LI Hui, ZHAO Lin, MAO Ying. Analysis of six-degree-of-freedom motion in submarines under sea disturbance[J]. Journal of Harbin engineering university, 2017, 38(1): 94-100.
[12] XING Yuan, YOUNG R, NGUYEN G, et al. Optimal path planning for wireless power transfer robot using area division deep reinforcement learning[J]. Wireless power transfer, 2022, 9(1): 9921885.
[13] 郭兴海, 计明军, 张卫丹, 等. 可变洋流环境中自主水下航行器动态路径规划的改进QPSO算法[J]. 系统工程理论与实践, 2021, 41(8): 2112-2124.
GUO Xinghai, JI Mingjun, ZHANG Weidan, et al. Improved QPSO algorithm for dynamic path planning of autonomous underwater vehicles in variable ocean current environment[J]. Systems engineering-theory & practice, 2021, 41(8): 2112-2124.
[14] KIANI F, SEYYEDABBASI A, ALIYEV R, et al. Adapted-RRT: novel hybrid method to solve three-dimensional path planning problem using sampling and metaheuristic-based algorithms[J]. Neural computing and applications, 2021, 33(22): 15569-15599.
[15] 刘锋, 张严, 陈彦勇, 等. S-57电子海图的快速读取及可视化存储[J]. 舰船科学技术, 2014, 36(7): 108-112.
LIU Feng, ZHANG Yan, CHEN Yanyong, et al. Rapid reading and visual storage of the S-57 electronic chart[J]. Ship science and technology, 2014, 36(7): 108-112.
[16] 扈震, 杨之江, 马振强. 基于S-57标准的电子海图三维可视化[J]. 地球科学, 2010, 35(3): 471-474.
HU Zhen, YANG Zhijiang, MA Zhenqiang. Electronic navigation chart 3D visualization based on S-57[J]. Earth science, 2010, 35(3): 471-474.
[17] HU Hao, ZHOU Yongjian, WANG Tonghao, et al. A multi-task algorithm for autonomous underwater vehicles 3D path planning[C]//2020 3rd International Conference on Unmanned Systems. Harbin: IEEE, 2020: 972-977.
[18] KRIEG M, MOHSENI K. Dynamic modeling and control of biologically inspired vortex ring thrusters for underwater robot locomotion[J]. IEEE transactions on robotics, 2010, 26(3): 542-554.
[19] BIJLSMA S J. Optimal ship routing with ocean current included[J]. Journal of navigation, 2010, 63(3): 565-568.
[20] YANG Yang, LI Juntao, PENG Lingling. Multi-robot path planning based on a deep reinforcement learning DQN algorithm[J]. CAAI transactions on intelligence technology, 2020, 5(3): 177-183.
[21] HAN J. An efficient approach to 3D path planning[J]. Information sciences, 2019, 478: 318-330.
[22] LI Jianxin, CHEN Yiting, ZHAO Xiuniao, et al. An improved DQN path planning algorithm[J]. The journal of supercomputing, 2022, 78(1): 616-639.
[23] QIAO Lei, ZHANG Weidong. Trajectory tracking control of AUVs via adaptive fast nonsingular integral terminal sliding mode control[J]. IEEE transactions on industrial informatics, 2020, 16(2): 1248-1258.
[24] GU Yuwan, ZHU Zhitao, LYU Jidong, et al. DM-DQN: Dueling Munchausen deep Q network for robot path planning[J]. Complex & intelligent systems, 2023, 9(4): 4287-4300.
[25] 赵玉新, 杜登辉, 成小会, 等. 基于强化学习的海洋移动观测网络观测路径规划方法[J]. 智能系统学报, 2022, 17(1): 192-200.
ZHAO Yuxin, DU Denghui, CHENG Xiaohui, et al. Path planning for mobile ocean observation network based on reinforcement learning[J]. CAAI transactions on intelligent systems, 2022, 17(1): 192-200.
[26] WU Keyu, WANG Han, ESFAHANI M A, et al. Achieving real-time path planning in unknown environments through deep neural networks[J]. IEEE transactions on intelligent transportation systems, 2022, 23(3): 2093-2102.
[27] YANG Jian, XU Xin, YIN Dong, et al. A space mapping based 0–1 linear model for onboard conflict resolution of heterogeneous unmanned aerial vehicles[J]. IEEE transactions on vehicular technology, 2019, 68(8): 7455-7465.
[28] PHUNG M D, HA Q P. Safety-enhanced UAV path planning with spherical vector-based particle swarm optimization[J]. Applied soft computing, 2021, 107: 107376.
[29] ZHANG Jiaxin, LIU Meiqin, ZHANG Senlin, et al. Robust global route planning for an autonomous underwater vehicle in a stochastic environment[J]. Frontiers of information technology & electronic engineering, 2022, 23(11): 1658-1672.
[30] WANG Jiankun, JIA Xiao, ZHANG Tianyi, et al. Deep neural network enhanced sampling-based path planning in 3D space[J]. IEEE transactions on automation science and engineering, 2022, 19(4): 3434-3443.
[31] MELO A G, PINTO M F, MARCATO A L M, et al. Dynamic optimization and heuristics based online coverage path planning in 3D environment for UAVs[J]. Sensors, 2021, 21(4): 1108.
[32] TAN Li, ZHANG Hongtao, SHI Jiaqi, et al. A robust multiple unmanned aerial vehicles 3D path planning strategy via improved particle swarm optimization[J]. Computers and electrical engineering, 2023, 111: 108947.
[33] QI Yongqiang, LI Shuai, KE Yi. Three-dimensional path planning of constant thrust unmanned aerial vehicle based on artificial fluid method[J]. Discrete dynamics in nature and society, 2020: 4269193.
相似文献/Similar References:
[1]连传强,徐昕,吴军,等.面向资源分配问题的Q-CF多智能体强化学习[J].智能系统学报,2011,6(2):95.
 LIAN Chuanqiang,XU Xin,WU Jun,et al.Q-CF multi-agent reinforcement learning for resource allocation problems[J].CAAI Transactions on Intelligent Systems,2011,6(2):95.
[2]梁爽,曹其新,王雯珊,等.基于强化学习的多定位组件自动选择方法[J].智能系统学报,2016,11(2):149.[doi:10.11992/tis.201510031]
 LIANG Shuang,CAO Qixin,WANG Wenshan,et al.An automatic switching method for multiple location components based on reinforcement learning[J].CAAI Transactions on Intelligent Systems,2016,11(2):149.[doi:10.11992/tis.201510031]
[3]张文旭,马磊,王晓东.基于事件驱动的多智能体强化学习研究[J].智能系统学报,2017,12(1):82.[doi:10.11992/tis.201604008]
 ZHANG Wenxu,MA Lei,WANG Xiaodong.Reinforcement learning for event-triggered multi-agent systems[J].CAAI Transactions on Intelligent Systems,2017,12(1):82.[doi:10.11992/tis.201604008]
[4]周文吉,俞扬.分层强化学习综述[J].智能系统学报,2017,12(5):590.[doi:10.11992/tis.201706031]
 ZHOU Wenji,YU Yang.A survey of hierarchical reinforcement learning[J].CAAI Transactions on Intelligent Systems,2017,12(5):590.[doi:10.11992/tis.201706031]
[5]张文旭,马磊,贺荟霖,等.强化学习的地-空异构多智能体协作覆盖研究[J].智能系统学报,2018,13(2):202.[doi:10.11992/tis.201609017]
 ZHANG Wenxu,MA Lei,HE Huilin,et al.Air-ground heterogeneous coordination for multi-agent coverage based on reinforcement learning[J].CAAI Transactions on Intelligent Systems,2018,13(2):202.[doi:10.11992/tis.201609017]
[6]徐鹏,谢广明,文家燕,等.事件驱动的强化学习多智能体编队控制[J].智能系统学报,2019,14(1):93.[doi:10.11992/tis.201807010]
 XU Peng,XIE Guangming,WEN Jiayan,et al.Event-triggered reinforcement learning formation control for multi-agent[J].CAAI Transactions on Intelligent Systems,2019,14(1):93.[doi:10.11992/tis.201807010]
[7]郭宪,方勇纯.仿生机器人运动步态控制:强化学习方法综述[J].智能系统学报,2020,15(1):152.[doi:10.11992/tis.201907052]
 GUO Xian,FANG Yongchun.Locomotion gait control for bionic robots: a review of reinforcement learning methods[J].CAAI Transactions on Intelligent Systems,2020,15(1):152.[doi:10.11992/tis.201907052]
[8]申翔翔,侯新文,尹传环.深度强化学习中状态注意力机制的研究[J].智能系统学报,2020,15(2):317.[doi:10.11992/tis.201809033]
 SHEN Xiangxiang,HOU Xinwen,YIN Chuanhuan.State attention in deep reinforcement learning[J].CAAI Transactions on Intelligent Systems,2020,15(2):317.[doi:10.11992/tis.201809033]
[9]殷昌盛,杨若鹏,朱巍,等.多智能体分层强化学习综述[J].智能系统学报,2020,15(4):646.[doi:10.11992/tis.201909027]
 YIN Changsheng,YANG Ruopeng,ZHU Wei,et al.A survey on multi-agent hierarchical reinforcement learning[J].CAAI Transactions on Intelligent Systems,2020,15(4):646.[doi:10.11992/tis.201909027]
[10]莫宏伟,田朋.基于注意力融合的图像描述生成方法[J].智能系统学报,2020,15(4):740.[doi:10.11992/tis.201910039]
 MO Hongwei,TIAN Peng.An image caption generation method based on attention fusion[J].CAAI Transactions on Intelligent Systems,2020,15(4):740.[doi:10.11992/tis.201910039]

备注/Memo

收稿日期/Received: 2023-11-02.
作者简介/Author biographies: CHEN Shitong, lecturer, whose main research interests are ocean observation and detection technology and intelligent navigation; recipient of one first prize and one third prize of the Heilongjiang Province Technology Invention Award, holder of five authorized invention patents, and author of more than ten academic papers. E-mail: chenshitong@hrbeu.edu.cn. LU Ziyu, master's degree student, whose main research interests are reinforcement learning algorithms and path planning algorithms. E-mail: lycxxlzy@hrbeu.edu.cn.
通讯作者/Corresponding author: CHEN Shitong. E-mail: chenshitong@hrbeu.edu.cn

更新日期/Last Update: 2025-03-05