[1]王亚茹,杨春旺,屈卓,等.双线性特征融合和门控循环单元质量聚合的图像质量评价[J].智能系统学报,2025,20(4):946-957.[doi:10.11992/tis.202407028]
 WANG Yaru,YANG Chunwang,QU Zhuo,et al.Image quality assessment based on bilinear feature fusion and gate recurrent unit quality polymerization[J].CAAI Transactions on Intelligent Systems,2025,20(4):946-957.[doi:10.11992/tis.202407028]

Image quality assessment based on bilinear feature fusion and gate recurrent unit quality polymerization

参考文献/References:
[1] 方玉明, 眭相杰, 鄢杰斌, 等. 无参考图像质量评价研究进展[J]. 中国图象图形学报, 2021, 26(2): 265-286.
FANG Yuming, SUI Xiangjie, YAN Jiebin, et al. Progress in no-reference image quality assessment[J]. Journal of image and graphics, 2021, 26(2): 265-286.
[2] 曹玉东, 刘海燕, 贾旭, 等. 基于深度学习的图像质量评价方法综述[J]. 计算机工程与应用, 2021, 57(23): 27-36.
CAO Yudong, LIU Haiyan, JIA Xu, et al. Overview of image quality assessment method based on deep learning[J]. Computer engineering and applications, 2021, 57(23): 27-36.
[3] HU Runze, LIU Yutao, GU Ke, et al. Toward a no-reference quality metric for camera-captured images[J]. IEEE transactions on cybernetics, 2023, 53(6): 3651-3664.
[4] 秦小倩, 杜浩. 基于自然场景统计的图像质量评价算法[J]. 现代电子技术, 2023, 46(23): 36-42.
QIN Xiaoqian, DU Hao. Image quality assessment algorithm based on natural scene statistics[J]. Modern electronics technique, 2023, 46(23): 36-42.
[5] 李沛钊, 王同罕, 贾惠珍, 等. USformer-Net: 基于U-Net和Swin Transformer的脑部MRI图像质量评价方法[J]. 现代电子技术, 2024, 47(7): 1-7.
LI Peizhao, WANG Tonghan, JIA Huizhen, et al. USformer-Net: brain MRI image quality assessment fusing U-Net and Swin Transformer[J]. Modern electronics technique, 2024, 47(7): 1-7.
[6] 江本赤, 卞仕磊, 史晨阳, 等. 基于色貌尺度相位一致性的全参考图像质量评价[J]. 光学精密工程, 2023, 31(10): 1509-1521.
JIANG Benchi, BIAN Shilei, SHI Chenyang, et al. Full reference image quality assessment based on color appearance-based phase consistency[J]. Optics and precision engineering, 2023, 31(10): 1509-1521.
[7] 赵文清, 许丽娇, 陈昊阳, 等. 多层特征融合与语义增强的盲图像质量评价[J]. 智能系统学报, 2024, 19(1): 132-141.
ZHAO Wenqing, XU Lijiao, CHEN Haoyang, et al. Blind image quality assessment based on multi-level feature fusion and semantic enhancement[J]. CAAI transactions on intelligent systems, 2024, 19(1): 132-141.
[8] 王伟, 刘辉, 杨俊安. 一种特征字典映射的图像盲评价方法研究[J]. 智能系统学报, 2018, 13(6): 989-993.
WANG Wei, LIU Hui, YANG Jun’an. Blind quality evaluation with image features codebook mapping[J]. CAAI transactions on intelligent systems, 2018, 13(6): 989-993.
[9] 王成, 刘坤, 杜砾. 全参考图像质量指标评价分析[J]. 现代电子技术, 2023, 46(21): 39-43.
WANG Cheng, LIU Kun, DU Li. Evaluation and analysis of full reference image quality indicators[J]. Modern electronics technique, 2023, 46(21): 39-43.
[10] WANG Zhou, BOVIK A C, SHEIKH H R, et al. Image quality assessment: from error visibility to structural similarity[J]. IEEE transactions on image processing, 2004, 13(4): 600-612.
[11] SAMPAT M P, WANG Zhou, GUPTA S, et al. Complex wavelet structural similarity: a new image similarity index[J]. IEEE transactions on image processing, 2009, 18(11): 2385-2401.
[12] WANG Z, SIMONCELLI E P, BOVIK A C. Multiscale structural similarity for image quality assessment[C]//The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers. Pacific Grove: IEEE, 2003: 1398-1402.
[13] XUE Wufeng, ZHANG Lei, MOU Xuanqin, et al. Gradient magnitude similarity deviation: a highly efficient perceptual image quality index[J]. IEEE transactions on image processing, 2014, 23(2): 684-695.
[14] ZHANG Lin, SHEN Ying, LI Hongyu. VSI: a visual saliency-induced index for perceptual image quality assessment[J]. IEEE transactions on image processing, 2014, 23(10): 4270-4281.
[15] KIM J, LEE S. Deep learning of human visual sensitivity in image quality assessment framework[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 1969-1977.
[16] PRASHNANI E, CAI Hong, MOSTOFI Y, et al. PieAPP: perceptual image-error assessment through pairwise preference[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 1808-1817.
[17] ZHANG R, ISOLA P, EFROS A A, et al. The unreasonable effectiveness of deep features as a perceptual metric[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 586-595.
[18] DING Keyan, MA Kede, WANG Shiqi, et al. Image quality assessment: unifying structure and texture similarity[J]. IEEE transactions on pattern analysis and machine intelligence, 2022, 44(5): 2567-2581.
[19] SEO S, KI S, KIM M. A novel just-noticeable-difference-based saliency-channel attention residual network for full-reference image quality predictions[J]. IEEE transactions on circuits and systems for video technology, 2021, 31(7): 2602-2616.
[20] GAO Fei, WANG Yi, LI Panpeng, et al. DeepSim: deep similarity for image quality assessment[J]. Neurocomputing, 2017, 257: 104-114.
[21] WU Jinjian, MA Jupo, LIANG Fuhu, et al. End-to-end blind image quality prediction with cascaded deep neural network[J]. IEEE transactions on image processing, 2020, 29: 7414-7426.
[22] BOSSE S, MANIRY D, MÜLLER K R, et al. Deep neural networks for no-reference and full-reference image quality assessment[J]. IEEE transactions on image processing, 2018, 27(1): 206-219.
[23] CHEON M, YOON S J, KANG B, et al. Perceptual image quality assessment with transformers[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Nashville: IEEE, 2021: 433-442.
[24] VARGA D. No-reference image quality assessment using the statistics of global and local image features[J]. Electronics, 2023, 12(7): 1615.
[25] VARGA D. No-reference quality assessment of authentically distorted images based on local and global features[J]. Journal of imaging, 2022, 8(6): 173.
[26] LAO Shanshan, GONG Yuan, SHI Shuwei, et al. Attentions help CNNs see better: attention-based hybrid image quality assessment network[C]//2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. New Orleans: IEEE, 2022: 1139-1148.
[27] MA Kede, LIU Wentao, ZHANG Kai, et al. End-to-end blind image quality assessment using deep neural networks[J]. IEEE transactions on image processing, 2018, 27(3): 1202-1213.
[28] YUAN Li, CHEN Yunpeng, WANG Tao, et al. Tokens-to-token ViT: training vision transformers from scratch on ImageNet[C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 538-547.
[29] 毛明毅, 吴晨, 钟义信, 等. 加入自注意力机制的BERT命名实体识别模型[J]. 智能系统学报, 2020, 15(4): 772-779.
MAO Mingyi, WU Chen, ZHONG Yixin, et al. BERT named entity recognition model with self-attention mechanism[J]. CAAI transactions on intelligent systems, 2020, 15(4): 772-779.
[30] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16×16 words: Transformers for image recognition at scale[EB/OL]. (2020-10-22)[2024-07-24]. https://arxiv.org/abs/2010.11929.
[31] WANG Hao, ZHANG Yue, LIU Chao, et al. sEMG based hand gesture recognition with deformable convolutional network[J]. International journal of machine learning and cybernetics, 2022, 13(6): 1729-1738.
[32] SHI Shuwei, BAI Qingyan, CAO Mingdeng, et al. Region-adaptive deformable network for image quality assessment[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. Nashville: IEEE, 2021: 324-333.
[33] ZHANG Haochen, LIU Dong, XIONG Zhiwei. Convolutional neural network-based video super-resolution for action recognition[C]//2018 13th IEEE International Conference on Automatic Face & Gesture Recognition. Xi’an: IEEE, 2018: 746-750.
[34] ZHANG Weixia, MA Kede, YAN Jia, et al. Blind image quality assessment using a deep bilinear convolutional neural network[J]. IEEE transactions on circuits and systems for video technology, 2020, 30(1): 36-47.
[35] 刘扬, 王立虎, 杨礼波, 等. 改进EEMD-GRU混合模型在径流预报中的应用[J]. 智能系统学报, 2022, 17(3): 480-487.
LIU Yang, WANG Lihu, YANG Libo, et al. Application of improved EEMD-GRU hybrid model in runoff forecasting[J]. CAAI transactions on intelligent systems, 2022, 17(3): 480-487.
[36] GU Jinjin, CAI Haoming, CHEN Haoyu, et al. PIPAL: a large-scale image quality assessment dataset for perceptual image restoration[M]//Computer Vision-ECCV 2020. Cham: Springer International Publishing, 2020: 633-651.
[37] LAPARRA V, BALLÉ J, BERARDINO A, et al. Perceptual image quality assessment using a normalized Laplacian pyramid[J]. Electronic imaging, 2016, 28(16): 1-6.
[38] CHANDLER D M. Most apparent distortion: full-reference image quality assessment and the role of strategy[J]. Journal of electronic imaging, 2010, 19(1): 011006.
[39] SHEIKH H R, BOVIK A C. Image information and visual quality[J]. IEEE transactions on image processing, 2006, 15(2): 430-444.
[40] ZHANG Lin, ZHANG Lei, MOU Xuanqin, et al. FSIM: a feature similarity index for image quality assessment[J]. IEEE transactions on image processing, 2011, 20(8): 2378-2386.
[41] CHEN Chaofeng, MO Jiadi, HOU Jingwen, et al. TOPIQ: a top-down approach from semantics to distortions for image quality assessment[J]. IEEE transactions on image processing, 2024, 33: 2404-2418.

备注/Memo

Received: 2024-07-24.
Funding: Young Scientists Fund of the National Natural Science Foundation of China (62303184); Key Support Project of the Joint Funds of the National Natural Science Foundation of China (U21A20486); General Program of the National Natural Science Foundation of China (62373151); Youth Science Foundation Project of the Natural Science Foundation of Hebei Province (2024502006); Fundamental Research Funds for the Central Universities (2023JC006, 2024MS136, 2024MS138).
About the authors: WANG Yaru, Ph.D., lecturer. Main research interests: pattern recognition and computer vision, data mining, and electric power vision; author of more than 10 academic papers. E-mail: wangyaru@ncepu.edu.cn. YANG Chunwang, master's student. Main research interest: image quality assessment. E-mail: 2312661795@qq.com. ZHANG Shiyin, lecturer, Ph.D. Main research interests: computer vision and image processing; author of 5 academic papers. E-mail: shiyinzhang@ncepu.edu.cn.
Corresponding author: ZHANG Shiyin. E-mail: shiyinzhang@ncepu.edu.cn
