[1] JIA Chen, LIU Huaping, XU Xinying, et al. Multi-modal information fusion based on broad learning method[J]. CAAI Transactions on Intelligent Systems, 2019, 14(1): 150-157. [doi: 10.11992/tis.201803022]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 14
Issue: 2019(1)
Pages: 150-157
Column: Academic Papers - Machine Learning
Publication date: 2019-01-05
- Title: Multi-modal information fusion based on broad learning method
- Author(s): JIA Chen1; LIU Huaping2,3; XU Xinying1; SUN Fuchun2,3
- Affiliation(s):
1. College of Electrical and Power Engineering, Taiyuan University of Technology, Taiyuan 030600, China;
2. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China;
3. State Key Laboratory of Intelligent Technology and Systems, Tsinghua University, Beijing 100084, China
- Keywords: broad learning method; multi-modal fusion; correlation analysis; feature extraction; nonlinear transformation; object recognition; neural networks; RGB-D image classification
- CLC: TP391
- DOI: 10.11992/tis.201803022
- Abstract:
Multi-modal machine learning addresses the fusion problem that arises in data with different modalities by effectively learning their rich characteristics. Considering the differences between modalities, we propose a framework based on the broad learning method that learns and fuses two kinds of modal characteristics. The method first extracts abstract features from each modality, then represents the high-dimensional features in a common space to determine their correlation. We obtain a final representation through nonlinear fusion and feed it into a classifier for object recognition. Experiments are conducted on the Cornell Grasping Dataset and the Washington RGB-D Object Dataset, and the results confirm that, compared with traditional fusion methods, the proposed algorithm offers greater stability and speed.
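
As a rough illustration of the pipeline described in the abstract (per-modality feature extraction, correlation-based alignment in a shared space, nonlinear fusion, and classification), the sketch below wires these stages together. It is a minimal sketch under stated assumptions, not the paper's implementation: the random feature-node mapping, the use of scikit-learn's CCA for the correlation step, the ridge-regression readout, and all layer sizes and toy data are assumptions made for demonstration only.

```python
# Illustrative two-modality fusion in the broad-learning style.
# NOT the authors' implementation; every size, dataset, and library
# choice below is an assumption made for demonstration only.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)

def feature_nodes(X, n_nodes):
    """Random linear map + tanh, mimicking broad-learning feature nodes."""
    W = rng.standard_normal((X.shape[1], n_nodes))
    b = rng.standard_normal(n_nodes)
    return np.tanh(X @ W + b)

# Toy stand-ins for two modalities of the same 200 samples, 3 classes.
n, n_classes = 200, 3
X_rgb = rng.standard_normal((n, 64))     # e.g., RGB feature vectors
X_depth = rng.standard_normal((n, 32))   # e.g., depth feature vectors
y = rng.integers(0, n_classes, n)
Y = np.eye(n_classes)[y]                 # one-hot targets

# 1) Extract abstract features from each modality separately.
H_rgb = feature_nodes(X_rgb, 50)
H_depth = feature_nodes(X_depth, 50)

# 2) Project both modalities into a shared, maximally correlated space.
cca = CCA(n_components=10)
Z_rgb, Z_depth = cca.fit_transform(H_rgb, H_depth)

# 3) Nonlinear fusion of the aligned representations (enhancement nodes).
Z = np.hstack([Z_rgb, Z_depth])
A = np.hstack([Z, feature_nodes(Z, 200)])

# 4) Closed-form ridge-regression readout, as in broad learning systems.
lam = 1e-2
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
print("train accuracy:", ((A @ W_out).argmax(axis=1) == y).mean())
```

The closed-form readout in step 4, rather than iterative gradient-based training, is what gives broad-learning approaches their speed advantage, which is consistent with the stability and speed claim in the abstract.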