[1] QU Haicheng, WANG Yuping, XIE Mengting, et al. Infrared and visible image fusion combined with brightness perception and dense convolution[J]. CAAI Transactions on Intelligent Systems, 2022, 17(3): 643-652. [doi:10.11992/tis.202104004]

Infrared and visible image fusion combined with brightness perception and dense convolution

References:
[1] LIU Shuai, LI Shijin, FENG Jun. Remote sensing image classification based on adaptive fusion of multiple features[J]. Journal of data acquisition and processing, 2014, 29(1): 108–115.
[2] SUN Jie, HUANG Chengning, WANG Yuxiang. Target classification of SAR images based on clustering of multiple modes and decision fusion[J]. Modern radar, 2020, 42(12): 66–71.
[3] BAI Yu, HOU Zhiqiang, LIU Xiaoyi, et al. An object detection algorithm based on decision-level fusion of visible light image and infrared image[J]. Journal of Air Force Engineering University (natural science edition), 2020, 21(6): 53–59, 100.
[4] LI Shuhan, XU Hongke, WU Zhiyu. Traffic sign detection based on infrared and visible image fusion[J]. Modern electronics technique, 2020, 43(3): 45–49.
[5] MA Tongyu, CUI Jing, CHU Ding. Research on fusion technology of real 3D scene and video surveillance image based on WebGL[J]. Geomatics & spatial information technology, 2020, 43(S1): 80–83.
[6] BAVIRISETTI D P, DHULI R. Fusion of infrared and visible sensor images based on anisotropic diffusion and Karhunen-loeve transform[J]. IEEE sensors journal, 2016, 16(1): 203–209.
[7] KUMAR B K S. Image fusion based on pixel significance using cross bilateral filter[J]. Signal, image and video processing, 2015, 9(5): 1193–1204.
[8] MA Jinlei, ZHOU Zhiqiang, WANG Bo, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization[J]. Infrared physics & technology, 2017, 82: 8–17.
[9] LIU Yu, WANG Zengfu. Simultaneous image fusion and denoising with adaptive sparse representation[J]. IET image processing, 2015, 9(5): 347–357.
[10] BURT P, ADELSON E. The Laplacian pyramid as a compact image code[J]. IEEE transactions on communications, 1983, 31(4): 532–540.
[11] LI Shutao, YANG Bin, HU Jianwen. Performance comparison of different multi-resolution transforms for image fusion[J]. Information fusion, 2011, 12(2): 74–84.
[12] LI Shutao, YIN Haitao, FANG Leyuan. Group-sparse representation with dictionary learning for medical image denoising and fusion[J]. IEEE transactions on biomedical engineering, 2012, 59(12): 3450–3459.
[13] MA Jiayi, MA Yong, LI Chang. Infrared and visible image fusion methods and applications: a survey[J]. Information fusion, 2019, 45: 153–178.
[14] PRABHAKAR K R, SRIKAR V S, BABU R V. DeepFuse: a deep unsupervised approach for exposure fusion with extreme exposure image pairs[C]//Proceedings of 2017 IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 4724–4732.
[15] MA Jiayi, YU Wei, LIANG Pengwei, et al. FusionGAN: a generative adversarial network for infrared and visible image fusion[J]. Information fusion, 2019, 48: 11–26.
[16] LI Hui, WU Xiaojun, KITTLER J. Infrared and visible image fusion using a deep learning framework[C]//Proceedings of 24th International Conference on Pattern Recognition (ICPR). Beijing: IEEE, 2018: 2705–2710.
[17] LI Hui, WU Xiaojun. DenseFuse: a fusion approach to infrared and visible images[J]. IEEE transactions on image processing, 2019, 28(5): 2614–2623.
[18] XU Han, MA Jiayi, JIANG Junjun, et al. U2Fusion: a unified unsupervised image fusion network[J]. IEEE transactions on pattern analysis and machine intelligence, 2022, 44(1): 502–518.
[19] HUANG Gao, LIU Zhuang, VAN DER MAATEN L, et al. Densely connected convolutional networks[C]//Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition(CVPR). Honolulu: IEEE, 2017: 2261–2269.
[20] LIU Wanjun, TONG Chang, QU Haicheng. An antagonistic image shadow removal algorithm based on dilated convolution and attention mechanism[J]. CAAI transactions on intelligent systems, 2021, 16(6): 1081–1089.
[21] TOET A. TNO image fusion dataset[EB/OL]. (2018-09-15)[2021-04-02]. https://figshare.com/articles/TNO_Image_Fusion_Dataset/1008029.
[22] HWANG S, PARK J, KIM N, et al. Multispectral pedestrian detection: benchmark dataset and baseline[C]//Proceedings of 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Boston: IEEE, 2015: 1037–1045.
[23] ROBERTS J W, VAN AARDT J, AHMED F B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification[J]. Journal of applied remote sensing, 2008, 2(1): 023522.
[24] QU Guihong, ZHANG Dali, YAN Pingfan. Information measure for performance of image fusion[J]. Electronics letters, 2002, 38(7): 313–315.
[25] ASLANTAS V, BENDES E. A new image quality metric for image fusion: the sum of the correlations of differences[J]. AEU-international journal of electronics and communications, 2015, 69(12): 1890–1896.



Copyright © CAAI Transactions on Intelligent Systems