WANG Ruiting, WANG Haiyan, CHEN Xiao, et al. Hyperspectral image classification based on hybrid convolutional neural network with triplet attention[J]. CAAI Transactions on Intelligent Systems, 2023, 18(2): 260-269. [doi: 10.11992/tis.202204002]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 18
Issue: 2, 2023
Pages: 260-269
Column: Academic Papers - Machine Learning
Publication date: 2023-05-05
- Title:
Hyperspectral image classification based on hybrid convolutional neural network with triplet attention
- Author(s):
WANG Ruiting; WANG Haiyan; CHEN Xiao; GENG Xinzhe; LEI Tao
- Affiliation:
School of Electronic Information and Artificial Intelligence, Shaanxi University of Science and Technology, Xi'an 710021, China
- Keywords:
remote sensing; hyperspectral image classification; deep learning; feature extraction; dimension reduction; depthwise separable convolution; attention mechanism; hybrid convolutional neural network
- CLC:
TP751
- DOI:
10.11992/tis.202204002
- Abstract:
Hyperspectral images have a high spectral dimensionality, and existing networks fail to provide multilevel features at different depths, which affects classification accuracy and speed. To address these problems, kernel principal component analysis is used to reduce the dimensionality of hyperspectral images so that the reduced data achieve the best class separability, and a hybrid convolutional neural network with triplet attention (HCTA-Net) is proposed. The model is designed as a hybrid of 3D, 2D, and 1D CNNs, extracting fine spectral-spatial joint features by fusing convolutions of different dimensions. It also adds depthwise separable convolution to the 2D CNN to reduce the number of model parameters, and introduces a triplet attention mechanism whose three-branch structure realizes cross-dimensional information interaction and suppresses useless feature information. Experimental results on the Indian Pines, Salinas, and Pavia University datasets show that the proposed model is superior to the other comparison methods, with overall classification accuracies of 99.16%, 99.87%, and 99.76%, respectively.
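To make the pipeline described in the abstract more concrete, the following is a minimal sketch assuming PyTorch and scikit-learn; it is illustrative only and not the authors' released HCTA-Net implementation. It shows kernel PCA over the spectral bands followed by a hybrid 3D -> 2D -> 1D convolution stack with a depthwise separable 2D convolution. The triplet attention branch is omitted, and all layer sizes and names (HybridHSIClassifier, reduce_bands) are assumptions made for illustration.

```python
# Minimal illustrative sketch (NOT the authors' HCTA-Net code), assuming PyTorch
# and scikit-learn. Layer sizes, class/function names, and the omission of the
# triplet attention branch are all assumptions made for illustration.
import torch
import torch.nn as nn
from sklearn.decomposition import KernelPCA


def reduce_bands(cube, n_components=30):
    """Kernel PCA over the spectral axis of a small (H, W, B) NumPy cube."""
    h, w, b = cube.shape
    kpca = KernelPCA(n_components=n_components, kernel="rbf")
    reduced = kpca.fit_transform(cube.reshape(-1, b))    # (H*W, n_components)
    return reduced.reshape(h, w, n_components)


class DepthwiseSeparableConv2d(nn.Module):
    """Depthwise convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


class HybridHSIClassifier(nn.Module):
    """Hybrid 3D -> 2D (depthwise separable) -> 1D CNN over spatial patches."""
    def __init__(self, n_bands=30, n_classes=16):
        super().__init__()
        # 3D convolution: joint spectral-spatial features (shrinks spectral depth by 6)
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)), nn.ReLU())
        # 2D depthwise separable convolution: spatial features with fewer parameters
        self.conv2d = nn.Sequential(
            DepthwiseSeparableConv2d(8 * (n_bands - 6), 64), nn.ReLU())
        # 1D convolution over the flattened spatial positions, then global pooling
        self.conv1d = nn.Sequential(
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                                # x: (N, 1, bands, p, p)
        x = self.conv3d(x)                               # (N, 8, bands-6, p, p)
        n, c, d, h, w = x.shape
        x = self.conv2d(x.reshape(n, c * d, h, w))       # (N, 64, p, p)
        x = self.conv1d(x.flatten(2))                    # (N, 128, 1)
        return self.fc(x.squeeze(-1))                    # (N, n_classes)


if __name__ == "__main__":
    model = HybridHSIClassifier(n_bands=30, n_classes=16)
    patches = torch.randn(4, 1, 30, 11, 11)              # 4 patches, 30 reduced bands
    print(model(patches).shape)                          # torch.Size([4, 16])
```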