[1] WANG Wenqing, ZHANG Xiaoqiao, HE Ji, et al. Pansharpening based on hybrid dual-branch convolutional and graph convolutional neural networks[J]. CAAI Transactions on Intelligent Systems, 2025, 20(3): 649-657. [doi:10.11992/tis.202401003]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume: 20
Issue: 2025, No. 3
Pages: 649-657
Column: Academic Articles: Machine Perception and Pattern Recognition
Publication date: 2025-05-05
- Title:
Pansharpening based on hybrid dual-branch convolutional and graph convolutional neural networks
- Author(s):
WANG Wenqing 1,2; ZHANG Xiaoqiao 1; HE Ji 1; LIU Han 1,2; LIU Ding 1,2
1. School of Automation and Information Engineering, Xi’an University of Technology, Xi’an 710048, China;
2. Shaanxi Key Laboratory of Complex System Control and Intelligent Information Processing, Xi’an University of Technology, Xi’an 710048, China
- Keywords:
image fusion; remote sensing; image processing; deep learning; convolutional neural network; machine learning; feature extraction; image reconstruction
- CLC:
TP751
- DOI:
10.11992/tis.202401003
- Abstract:
Pansharpening of multispectral images is an active research topic in remote sensing image processing and interpretation. Compared with traditional pansharpening methods, deep learning-based methods extract deep features and thereby greatly improve the quality of fused images. Here, a method based on a hybrid dual-branch convolutional neural network (CNN) and a graph convolutional neural network (GCNN) is proposed to simultaneously extract spectral, spatial, and non-geometric structural information and to improve the spatial and spectral resolution of the fused images. The method builds a multi-resolution analysis fusion framework and, within it, a feature extraction module, a feature fusion module, and an image reconstruction module based on deep neural networks. First, the hybrid dual-branch network module is constructed from 2D and 3D CNNs, which focus on extracting spatial and spectral features, respectively. Second, a GCNN is introduced to capture the spatial relationships among the nodes of the image's graph structure and to integrate non-local information. The spatial, spectral, and non-geometric features extracted from the multispectral and panchromatic images are then fused by the feature fusion module. Finally, the fused features are fed into the image reconstruction network to reconstruct high-quality multispectral images. The proposed method is validated experimentally on GeoEye-1 and IKONOS remote sensing data. The experimental results show that it outperforms the compared methods in both subjective visual and objective quality evaluations.
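The GCNN branch summarized in the abstract propagates information across image regions treated as graph nodes, which is how non-local structure is integrated. The following is a minimal, illustrative sketch of one graph-convolution layer in the common normalized-propagation form H' = ReLU(Â H W); the tiny graph, node features, and weight matrix are hypothetical placeholders, not the paper's actual architecture or parameters:

```python
# Illustrative single graph-convolution layer: H' = ReLU(A_hat @ H @ W),
# with A_hat the symmetrically normalized adjacency including self-loops.
# Pure-Python matrices; values are made up for demonstration only.

def matmul(A, B):
    """Naive matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def normalize_adjacency(A):
    """A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    n = len(A)
    A_tilde = [[A[i][j] + (1 if i == j else 0) for j in range(n)]
               for i in range(n)]                      # add self-loops
    inv_sqrt = [sum(row) ** -0.5 for row in A_tilde]   # D^{-1/2} diagonal
    return [[inv_sqrt[i] * A_tilde[i][j] * inv_sqrt[j] for j in range(n)]
            for i in range(n)]

def gcn_layer(A, H, W):
    """One propagation step: aggregate neighbors, transform, apply ReLU."""
    Z = matmul(matmul(normalize_adjacency(A), H), W)
    return [[max(0.0, z) for z in row] for row in Z]

# Hypothetical 3-node graph (e.g., three image regions): 0-1 and 1-2 linked.
A = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
H = [[1.0, 0.0],   # per-node input features
     [0.0, 1.0],
     [1.0, 1.0]]
W = [[0.5, -0.5],  # "learnable" weights, fixed here for illustration
     [0.5,  0.5]]
out = gcn_layer(A, H, W)
```

Each output row mixes a node's own features with those of its graph neighbors, which is the mechanism by which the GCNN branch can relate non-adjacent but structurally similar image regions.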