[1] ZHANG Pengpeng, WEI Changyun, ZHANG Kairui, et al. Self-learning approach to control parameter adjustment for quadcopter landing on a moving platform[J]. CAAI Transactions on Intelligent Systems, 2022, 17(5): 931–940. [doi:10.11992/tis.202107040]

Self-learning approach to control parameter adjustment for quadcopter landing on a moving platform

References:
[1] LIU P, CHEN A Y, HUANG Yinnan, et al. A review of rotorcraft Unmanned Aerial Vehicle (UAV) developments and applications in civil engineering[J]. Smart structures and systems, 2014, 13(6): 1065–1094.
[2] TSOUROS D, BIBI S, SARIGIANNIDIS P. A review on UAV-based applications for precision agriculture[J]. Information (Switzerland), 2019, 10(11): 349.
[3] REN H, ZHAO Y, XIAO W, et al. A review of UAV monitoring in mining areas: current status and future perspectives[J]. International journal of coal science & technology, 2019, 6(3): 320–333.
[4] MICHAEL N, SHEN Shaojie, MOHTA K, et al. Collaborative mapping of an earthquake-damaged building via ground and aerial robots[J]. Journal of field robotics, 2012, 29(5): 832–841.
[5] WANG Huaxian, HUA Rong, LIU Huaping, et al. Self-organizing feature map method for multi-target active perception of unmanned aerial vehicle systems[J]. CAAI transactions on intelligent systems, 2020, 15(3): 609–614. (in Chinese)
[6] BACA T, STEPAN P, SPURNY V, et al. Autonomous landing on a moving vehicle with an unmanned aerial vehicle[J]. Journal of field robotics, 2019, 36(5): 874–891.
[7] TALHA M, ASGHAR F, ROHAN A, et al. Fuzzy logic-based robust and autonomous safe landing for UAV quadcopter[J]. Arabian journal for science and engineering, 2019, 44(3): 2627–2639.
[8] FENG Yi, ZHANG Cong, BAEK S, et al. Autonomous landing of a UAV on a moving platform using model predictive control[J]. Drones, 2018, 2(4): 34.
[9] RODRIGUEZ-RAMOS A, SAMPEDRO C, BAVLE H, et al. A deep reinforcement learning technique for vision-based autonomous multirotor landing on a moving platform[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid, IEEE, 2018: 1010–1017.
[10] SHAKER M, SMITH M N R, YUE Shigang, et al. Vision-based landing of a simulated unmanned aerial vehicle with fast reinforcement learning[C]//2010 International Conference on Emerging Security Technologies. Canterbury, IEEE, 2010: 183–188.
[11] RODRIGUEZ-RAMOS A, SAMPEDRO C, BAVLE H, et al. A deep reinforcement learning strategy for UAV autonomous landing on a moving platform[J]. Journal of intelligent & robotic systems, 2019, 93(1/2): 351–366.
[12] LEE S, SHIM T, KIM S, et al. Vision-based autonomous landing of a multi-copter unmanned aerial vehicle using reinforcement learning[C]//2018 International Conference on Unmanned Aircraft Systems (ICUAS). Dallas, IEEE, 2018: 108–114.
[13] ARULKUMARAN K, DEISENROTH M, BRUNDAGE M, et al. Deep reinforcement learning: a brief survey[J]. IEEE signal processing magazine, 2017, 34: 26–38.
[14] HESSEL M, SOYER H, ESPEHOLT L, et al. Multi-task deep reinforcement learning with PopArt[J]. Proceedings of the AAAI conference on artificial intelligence, 2019, 33: 3796–3803.
[15] SEDIGHIZADEH M, REZAZADEH A. Adaptive PID controller based on reinforcement learning for wind turbine control[J]. World academy of science, engineering and technology, international journal of computer, electrical, automation, control and information engineering, 2008, 2: 124–129.
[16] WANG Shuti, YIN Xunhe, LI Peng, et al. Trajectory tracking control for mobile robots using reinforcement learning and PID[J]. Iranian journal of science and technology, transactions of electrical engineering, 2020, 44(3): 1059–1068.
[17] ASADI K, KALKUNTE SURESH A, ENDER A, et al. An integrated UGV-UAV system for construction site data collection[J]. Automation in construction, 2020, 112: 103068.
[18] ERGINER B, ALTUG E. Modeling and PD control of a quadrotor VTOL vehicle[C]//2007 IEEE Intelligent Vehicles Symposium. Istanbul, IEEE, 2007: 894–899.
[19] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning[EB/OL]. (2015-01-01)[2021-01-01]. https://arxiv.org/abs/1509.02971.
[20] CARLUCHO I, DE PAULA M, VILLAR S A, et al. Incremental Q-learning strategy for adaptive PID control of mobile robots[J]. Expert systems with applications, 2017, 80: 183–199.
[21] CHOI J, CHEON D, LEE J. Robust landing control of a quadcopter on a slanted surface[J]. International journal of precision engineering and manufacturing, 2021, 22(6): 1147–1156.
[22] KIM J, JUNG Y, LEE D, et al. Landing control on a mobile platform for multi-copters using an omnidirectional image sensor[J]. Journal of intelligent & robotic systems, 2016, 84(1/2/3/4): 529–541.
[23] CELEMIN C, RUIZ-DEL-SOLAR J. An interactive framework for learning continuous actions policies based on corrective feedback[J]. Journal of intelligent & robotic systems, 2019, 95(1): 77–97.
[24] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016, 529(7587): 484–489.
[25] GRIGORESCU S, TRASNEA B, COCIAS T, et al. A survey of deep learning techniques for autonomous driving[J]. Journal of field robotics, 2020, 37(3): 362–386.
[26] HESSEL M, MODAYIL J, VAN HASSELT H, et al. Rainbow: combining improvements in deep reinforcement learning[EB/OL]. (2017-01-01)[2021-01-01]. https://arxiv.org/abs/1710.02298.
[27] SUTTON R S, BARTO A G. Reinforcement learning: an introduction[M]. Cambridge, Mass: MIT Press, 1998.
[28] WATKINS C J C H, DAYAN P. Q-learning[J]. Machine learning, 1992, 8(3/4): 279–292.
[29] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Human-level control through deep reinforcement learning[J]. Nature, 2015, 518(7540): 529–533.
[30] VAN HASSELT H, GUEZ A, SILVER D. Deep reinforcement learning with double Q-learning[EB/OL]. (2015-05-01)[2020-12-20]. https://arxiv.org/abs/1509.06461v3.
[31] WANG Z, SCHAUL T, HESSEL M, et al. Dueling network architectures for deep reinforcement learning[C]//International Conference on Machine Learning. PMLR, 2016: 1995–2003.
[32] KOUBAA A. Robot operating system (ROS): the complete reference, volume 1[M]. Cham: Springer, 2016.
