WANG Lipeng, WANG Xiaochen, QI Yao, et al. Indoor robot semantic VI-SLAM based on feature fusion and dynamic background removal[J]. CAAI Transactions on Intelligent Systems, 2024, 19(6): 1438-1448. [doi:10.11992/tis.202309025]

Indoor Robot Semantic VI-SLAM Based on Feature Fusion and Dynamic Background Removal

参考文献/References:
[1] SÜNDERHAUF N, PHAM T T, LATIF Y, et al. Meaningful maps with object-oriented semantic mapping[C]//2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE, 2017: 5079-5085.
[2] WHELAN T, SALAS-MORENO R F, GLOCKER B, et al. ElasticFusion: real-time dense SLAM and light source estimation[J]. The international journal of robotics research, 2016, 35(14): 1697-1716.
[3] MCCORMAC J, HANDA A, DAVISON A, et al. SemanticFusion: dense 3D semantic mapping with convolutional neural networks[C]//2017 IEEE International Conference on Robotics and Automation. New York: IEEE, 2017: 4628-4635.
[4] WANG Yutong, XU Bin, FAN Wei, et al. QISO-SLAM: object-oriented SLAM using dual quadrics as landmarks based on instance segmentation[J]. IEEE robotics and automation letters, 2023, 8(4): 2253-2260.
[5] SALAS-MORENO R F, NEWCOMBE R A, STRASDAT H, et al. SLAM++: simultaneous localisation and mapping at the level of objects[C]//2013 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2013: 1352-1359.
[6] DAME A, PRISACARIU V A, REN C Y, et al. Dense reconstruction using 3D object shape priors[C]//2013 IEEE Conference on Computer Vision and Pattern Recognition. New York: IEEE, 2013: 1288-1295.
[7] YU Chao, LIU Zuxin, LIU Xinjun, et al. DS-SLAM: a semantic visual SLAM towards dynamic environments[C]//2018 IEEE/RSJ International Conference on Intelligent Robots and Systems. New York: IEEE, 2018: 1168-1174.
[8] KANEKO M, IWAMI K, OGAWA T, et al. Mask-SLAM: robust feature-based monocular SLAM by masking using semantic segmentation[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New York: IEEE, 2018: 371-378.
[9] WANG Kai, LIN Yimin, WANG Luowei, et al. A unified framework for mutual improvement of SLAM and semantic segmentation[C]//2019 International Conference on Robotics and Automation. Montreal: IEEE, 2019: 5224-5230.
[10] BESCOS B, FÁCIL J M, CIVERA J, et al. DynaSLAM: tracking, mapping, and inpainting in dynamic scenes[J]. IEEE robotics and automation letters, 2018, 3(4): 4076-4083.
[11] BESCOS B, CAMPOS C, TARDÓS J D, et al. DynaSLAM II: tightly-coupled multi-object tracking and SLAM[J]. IEEE robotics and automation letters, 2021, 6(3): 5191-5198.
[12] LI Ao, WANG Jikai, XU Meng, et al. DP-SLAM: a visual SLAM with moving probability towards dynamic environments[J]. Information sciences, 2021, 556: 128-142.
[13] ZHAO Yao, XIONG Zhi, ZHOU Shuailin, et al. KSF-SLAM: a key segmentation frame based semantic SLAM in dynamic environments[J]. Journal of intelligent & robotic systems, 2022, 105(1): 3.
[14] HU Zhangfang, ZHAO Jiang, LUO Yuan, et al. Semantic SLAM based on improved DeepLabv3+ in dynamic scenarios[J]. IEEE access, 2022, 10: 21160-21168.
[15] HUANG Shisheng, MA Zeyu, MU Taijiang, et al. Supervoxel convolution for online 3D semantic segmentation[J]. ACM transactions on graphics, 2021, 40(3): 1-15.
[16] MCCORMAC J, CLARK R, BLOESCH M, et al. Fusion++: volumetric object-level SLAM[C]//2018 International Conference on 3D Vision. New York: IEEE, 2018: 32-41.
[17] HE Kaiming, GKIOXARI G, DOLLÁR P, et al. Mask R-CNN[C]//2017 IEEE International Conference on Computer Vision. New York: IEEE, 2017: 2980-2988.
[18] NAKAJIMA Y, SAITO H. Efficient object-oriented semantic mapping with object detector[J]. IEEE access, 2019, 7: 3206-3213.
[19] PHAM Q H, HUA B S, NGUYEN T, et al. Real-time progressive 3D semantic segmentation for indoor scenes[C]//2019 IEEE Winter Conference on Applications of Computer Vision. Waikoloa: IEEE, 2019: 1089-1098.
[20] TATENO K, TOMBARI F, LAINA I, et al. CNN-SLAM: real-time dense monocular SLAM with learned depth prediction[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 6565-6574.
[21] CLARK R, WANG Sen, WEN Hongkai, et al. VINet: visual-inertial odometry as a sequence-to-sequence learning problem[C]//Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI, 2017: 3995-4001.
[22] DETONE D, MALISIEWICZ T, RABINOVICH A. SuperPoint: self-supervised interest point detection and description[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Salt Lake City: IEEE, 2018: 337-33712.
[23] CHEN L C, PAPANDREOU G, KOKKINOS I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs[J]. IEEE transactions on pattern analysis and machine intelligence, 2018, 40(4): 834-848.
[24] SONG Chengqun, ZENG Bo, SU Tong, et al. Data association and loop closure in semantic dynamic SLAM using the table retrieval method[J]. Applied intelligence, 2022, 52(10): 11472-11488.
[25] QIAN Zhentian, FU Jie, XIAO Jing. Towards accurate loop closure detection in semantic SLAM with 3D semantic covisibility graphs[J]. IEEE robotics and automation letters, 2022, 7(2): 2455-2462.

备注/Memo

Received: 2023-09-13.
Funding: Key Project of the 2023 Heilongjiang Provincial Education Science Planning (GJB1423059); National Natural Science Foundation of China (62173103); Natural Science Foundation of Heilongjiang Province (LH2024F037); Fundamental Research Funds for the Central Universities (3072024XX0403).
About the authors: WANG Lipeng, associate professor and doctoral supervisor. His main research interests are semantic SLAM, nonlinear control, and complex system modeling. He has led 8 projects, including National Natural Science Foundation of China general and youth programs, Heilongjiang Provincial Natural Science Foundation projects, and industry-funded projects; holds 9 authorized invention patents; has received provincial- and ministerial-level special-class and first-class prizes for scientific and technological progress; and has published more than 30 academic papers. E-mail: wanglipeng@hrbeu.edu.cn. WANG Xiaochen, master's student, whose main research interests are multi-robot cooperation and visual SLAM. E-mail: 13593593764@163.com. QI Yao, master's student, whose main research interests are deep learning and visual-inertial SLAM. E-mail: qiyao0208@163.com.
Corresponding author: WANG Lipeng. E-mail: wanglipeng@hrbeu.edu.cn.

更新日期/Last Update: 2024-11-05
Copyright © Editorial Office of CAAI Transactions on Intelligent Systems