[1] WANG Fanchao, DING Shifei. Image super-resolution reconstruction based on widely activated deep residual networks[J]. CAAI Transactions on Intelligent Systems, 2022, 17(2): 440-446. [doi:10.11992/tis.202106023]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume:
17
Issue:
2, 2022
Pages:
440-446
Column:
Wu Wenjun Artificial Intelligence Science and Technology Award Forum
Publication date:
2022-03-05
- Title:
-
Image super-resolution reconstruction based on widely activated deep residual networks
- Author(s):
-
WANG Fanchao 1; DING Shifei 1,2
-
1. School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China;
2. Mine Digitization Engineering Research Center of Ministry of Education of the People’s Republic of China, Xuzhou 221116, China
-
- Keywords:
-
deep learning; super-resolution; wide activation; perceptual loss; feature reconstruction; peak signal-to-noise ratio; structural similarity; visual experience
- CLC:
-
TP391.41
- DOI:
-
10.11992/tis.202106023
- Abstract:
-
Deep-learning-based image super-resolution methods typically use the mean squared error loss as the optimization objective in order to obtain good image evaluation indexes. However, most reconstructed images fail to meet visual-experience requirements because high-frequency signals are severely lost and texture edges appear blurred. In response to these problems, this paper proposes a super-resolution model based on a widely activated deep residual network combined with perceptual loss. A new loss function is formed by introducing perceptual and adversarial losses, and the weights of the different loss terms are tuned to improve the feature reconstruction ability for low-resolution images and to restore the high-frequency information they lack. Two internationally recognized evaluation indicators, peak signal-to-noise ratio and structural similarity, are selected as objective evaluation criteria. A comparative analysis is performed on different datasets, and the reconstructed images are also assessed by direct subjective observation. The results show that the proposed method outperforms the compared models in several respects. With the introduction of perceptual loss, the model can effectively reconstruct the texture details of low-resolution images and provide a markedly better visual experience.
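As a rough illustration of the weighted loss combination described in the abstract, the sketch below assumes a PyTorch setup; the feature extractor, discriminator, and the weights w_pix, w_perc, and w_adv are illustrative placeholders, not the configuration used in the paper.

```python
# Minimal sketch of a weighted composite loss (pixel + perceptual + adversarial),
# assuming PyTorch. All module arguments and weight values are assumptions for
# illustration only.
import torch
import torch.nn as nn


class CompositeSRLoss(nn.Module):
    def __init__(self, feature_extractor: nn.Module, discriminator: nn.Module,
                 w_pix: float = 1.0, w_perc: float = 0.1, w_adv: float = 1e-3):
        super().__init__()
        self.feature_extractor = feature_extractor  # e.g. truncated VGG features (assumption)
        self.discriminator = discriminator          # adversarial critic (assumption)
        self.w_pix, self.w_perc, self.w_adv = w_pix, w_perc, w_adv
        self.mse = nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        # Pixel-wise MSE term: the conventional objective that favors PSNR.
        pixel_loss = self.mse(sr, hr)
        # Perceptual term: distance between feature maps of the SR and HR images.
        perceptual_loss = self.mse(self.feature_extractor(sr),
                                   self.feature_extractor(hr))
        # Adversarial term: encourage the discriminator to label SR outputs as real.
        logits = self.discriminator(sr)
        adversarial_loss = self.bce(logits, torch.ones_like(logits))
        # Weighted sum; tuning the three weights trades off fidelity against texture detail.
        return (self.w_pix * pixel_loss
                + self.w_perc * perceptual_loss
                + self.w_adv * adversarial_loss)
```

In use, such a loss would replace the plain MSE objective during generator training, with the relative weights adjusted to balance objective metrics (PSNR/SSIM) against the restoration of high-frequency texture.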