[1] ZHANG Xinyu, ZOU Zhenhong, LI Zhiwei, et al. Deep multi-modal fusion in object detection for autonomous driving[J]. CAAI Transactions on Intelligent Systems, 2020, 15(4): 758-771. [doi:10.11992/tis.202002010]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume:
15
Issue:
2020, No. 4
Pages:
758-771
Column:
Wu Wenjun AI Science and Technology Award Forum
Publication date:
2020-07-05
- Title:
-
Deep multi-modal fusion in object detection for autonomous driving
- Author(s):
-
ZHANG Xinyu1,2; ZOU Zhenhong1,2; LI Zhiwei1,2; LIU Huaping3; LI Jun1,2
-
1. State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084, China;
2. School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China;
3. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
-
- Keywords:
-
data fusion; object detection; autonomous driving; deep learning; multimodal; perception; computer vision; sensor; survey
- CLC:
-
TP274;TP212
- DOI:
-
10.11992/tis.202002010
- Abstract:
-
In autonomous driving, there has been increasing interest in using multiple sensors to improve the accuracy of object detection models. Accordingly, research on data fusion has important academic and practical value. This paper summarizes the data fusion methods used in deep object detection models for autonomous driving in recent years. It first introduces the development of deep object detection and data fusion in autonomous driving, along with existing research and reviews, and then discusses three aspects — multi-modal object detection, fusion levels, and calculation methods — to comprehensively present the cutting-edge progress in this field. In addition, this paper analyzes the rationality of data fusion from three further perspectives: methods, robustness, and redundancy. Finally, open issues are discussed, and the challenges, strategies, and prospects are summarized.