[1] LI Kangbin, ZHU Qidan, MU Jinyou, et al. Automatic ship berthing path-planning method based on improved DDQN[J]. CAAI Transactions on Intelligent Systems, 2025, 20(1): 73-80. [doi: 10.11992/tis.202401005]

Automatic ship berthing path-planning method based on improved DDQN

References:
[1] 刘志林, 苑守正, 郑林熇, 等. 船舶自动靠泊技术的发展现状和趋势[J]. 中国造船, 2021, 62(4): 293-304.
LIU Zhilin, YUAN Shouzheng, ZHENG Linhe, et al. Development status and trend of ship automatic berthing technology[J]. Shipbuilding of China, 2021, 62(4): 293-304.
[2] 李国帅, 张显库, 张安超. 智能船舶靠泊技术研究热点与趋势[J]. 中国舰船研究, 2024, 19(1): 3-14.
LI Guoshuai, ZHANG Xianku, ZHANG Anchao. Research hotspots and tendency of intelligent ship berthing technology[J]. Chinese journal of ship research, 2024, 19(1): 3-14.
[3] 包政凯, 朱齐丹, 刘永超. 满秩分解最小二乘法船舶航向模型辨识[J]. 智能系统学报, 2022, 17(1): 137-143.
BAO Zhengkai, ZHU Qidan, LIU Yongchao. Ship heading model identification based on full rank decomposition least square method[J]. CAAI transactions on intelligent systems, 2022, 17(1): 137-143.
[4] WANG Shaobo, ZHANG Yingjun, LI Lianbo. A collision avoidance decision-making system for autonomous ship based on modified velocity obstacle method[J]. Ocean engineering, 2020, 215: 107910.
[5] LIU Xinzhao, REN Junsheng, WANG Hui. Optimal energy trajectory planning and control for automatic ship berthing[C]//2020 39th Chinese Control Conference. Shenyang: IEEE, 2020: 1420-1425.
[6] GAO Xuanyu, DONG Yitao, HAN Yi. An optimized path planning method for container ships in Bohai Bay based on improved deep Q-learning[J]. IEEE access, 2023, 11: 91275-91292.
[7] GUO Siyu, ZHANG Xiuguo, ZHENG Yisong, et al. An autonomous path planning model for unmanned ships based on deep reinforcement learning[J]. Sensors, 2020, 20(2): 426.
[8] HE Zhibo, LIU Chenguang, CHU Xiumin, et al. Dynamic anti-collision A-star algorithm for multi-ship encounter situations[J]. Applied ocean research, 2022, 118: 102995.
[9] LIU Chenguang, MAO Qingzhou, CHU Xiumin, et al. An improved A-star algorithm considering water current, traffic separation and berthing for vessel path planning[J]. Applied sciences, 2019, 9(6): 1057.
[10] CHEN Chen, CHEN Xianqiao, MA Feng, et al. A knowledge-free path planning approach for smart ships based on reinforcement learning[J]. Ocean engineering, 2019, 189: 106299.
[11] 周治国, 余思雨, 于家宝, 等. 面向无人艇的T-DQN智能避障算法研究[J]. 自动化学报, 2023, 49(8): 1645-1655.
ZHOU Zhiguo, YU Siyu, YU Jiabao, et al. Research on T-DQN intelligent obstacle avoidance algorithm of unmanned surface vehicle[J]. Acta automatica sinica, 2023, 49(8): 1645-1655.
[12] SILVA JUNIOR A G D, SANTOS D H D, NEGREIROS A P F, et al. High-level path planning for an autonomous sailboat robot using Q-learning[J]. Sensors, 2020, 20(6): 1550.
[13] 邢博闻, 张昭夷, 王世明, 等. 基于深度强化学习的多无人艇协同目标搜索算法[J]. 兵器装备工程学报, 2023, 44(11): 118-125.
XING Bowen, ZHANG Zhaoyi, WANG Shiming, et al. Multi-USV cooperative target search algorithm based on deep reinforcement learning[J]. Journal of ordnance equipment engineering, 2023, 44(11): 118-125.
[14] 曹景祥, 刘其成. 基于深度强化学习的路径规划算法研究[J]. 计算机应用与软件, 2022, 39(11): 231-237.
CAO Jingxiang, LIU Qicheng. Research on path planning algorithm based on deep reinforcement learning[J]. Computer applications and software, 2022, 39(11): 231-237.
[15] 闫皎洁, 张锲石, 胡希平. 基于强化学习的路径规划技术综述[J]. 计算机工程, 2021, 47(10): 16-25.
YAN Jiaojie, ZHANG Qieshi, HU Xiping. Review of path planning techniques based on reinforcement learning[J]. Computer engineering, 2021, 47(10): 16-25.
[16] 邓修朋, 崔建明, 李敏, 等. 深度强化学习在机器人路径规划中的应用[J]. 电子测量技术, 2023, 46(6): 1-8.
DENG Xiupeng, CUI Jianming, LI Min, et al. Application of deep reinforcement learning in robot path planning[J]. Electronic measurement technology, 2023, 46(6): 1-8.
[17] LI Lingyu, WU Defeng, HUANG Youqiang, et al. A path planning strategy unified with a COLREGS collision avoidance function based on deep reinforcement learning and artificial potential field[J]. Applied ocean research, 2021, 113: 102759.
[18] ZHANG Xinyu, WANG Chengbo, LIU Yuanchang, et al. Decision-making for the autonomous navigation of maritime autonomous surface ships based on scene division and deep reinforcement learning[J]. Sensors, 2019, 19(18): 4055.
[19] ZHOU Xinyuan, WU Peng, ZHANG Haifeng, et al. Learn to navigate: cooperative path planning for unmanned surface vehicles using deep reinforcement learning[J]. IEEE access, 2019, 7: 165262-165278.
[20] OZTURK U, AKDAG M, AYABAKAN T. A review of path planning algorithms in maritime autonomous surface ships: navigation safety perspective[J]. Ocean engineering, 2022, 251: 111010.
[21] MCCUE L. Handbook of marine craft hydrodynamics and motion control [Bookshelf][J]. IEEE control systems magazine, 2016, 36(1): 78-79.
[22] SKJETNE R. The maneuvering problem[EB/OL]. (2005-03-15)[2024-01-03]. https://www.researchgate.net/publication/236651127_The_Maneuvering_Problem.
[23] SONG Zhaofeng, ZHANG Jinfen, WU Da, et al. A novel path planning algorithm for ships in dynamic current environments[J]. Ocean engineering, 2023, 288: 116091.
[24] GAO Ning, QIN Zhijin, JING Xiaojun, et al. Anti-intelligent UAV jamming strategy via deep Q-networks[J]. IEEE transactions on communications, 2020, 68(1): 569-581.
[25] DEGRIS T, PILARSKI P M, SUTTON R S. Model-free reinforcement learning with continuous action in practice[C]//2012 American Control Conference. Montreal: IEEE, 2012: 2177-2182.
[26] HIGO Y, SAKANO M, NOBE H, et al. Development of trajectory-tracking maneuvering system for automatic berthing/unberthing based on double deep Q-network and experimental validation with an actual large ferry[J]. Ocean engineering, 2023, 287: 115750.
[27] VAN HASSELT H, GUEZ A, SILVER D. Deep reinforcement learning with double Q-learning[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Arizona: AAAI Press, 2016: 2094-2100.
[28] ZHANG Fei, GU Chaochen, YANG Feng. An improved algorithm of robot path planning in complex environment based on double DQN[M]//Lecture Notes in Electrical Engineering. Singapore: Springer Singapore, 2021: 303-313.
[29] YANG Xiaofei, SHI Yilun, LIU Wei, et al. Global path planning algorithm based on double DQN for multi-tasks amphibious unmanned surface vehicle[J]. Ocean engineering, 2022, 266: 112809.
[30] ZHANG Jiaqi, JIAO Xiaohong, YANG Chao. A double-deep Q-network-based energy management strategy for hybrid electric vehicles under variable driving cycles[J]. Energy technology, 2021, 9(2): 2000770.