[1]刘国奇,陈宗玉,刘栋,等.融合边界注意力的特征挖掘息肉小目标网络[J].智能系统学报,2024,19(5):1092-1101.[doi:10.11992/tis.202306025]
 LIU Guoqi,CHEN Zongyu,LIU Dong,et al.A small polyp objects network integrating boundary attention features[J].CAAI Transactions on Intelligent Systems,2024,19(5):1092-1101.[doi:10.11992/tis.202306025]

A small polyp objects network integrating boundary attention features

参考文献/References:
[1] XI Yue, XU Pengfei. Global colorectal cancer burden in 2020 and projections to 2040[J]. Translational oncology, 2021, 14(10): 101174.
[2] KOLLIGS F T. Diagnostics and epidemiology of colorectal cancer[J]. Visceral medicine, 2016, 32(3): 158-164.
[3] DAI Ying, CHEN Weimin, XU Xuanfu, et al. Factors affecting adenoma risk level in patients with intestinal polyp and association analysis[J]. Journal of healthcare engineering, 2022: 9479563.
[4] LI Zewen, LIU Fan, YANG Wenjie, et al. A survey of convolutional neural networks: analysis, applications, and prospects[J]. IEEE transactions on neural networks and learning systems, 2022, 33(12): 6999-7019.
[5] SARVAMANGALA D R, KULKARNI R V. Convolutional neural networks in medical image understanding: a survey[J]. Evolutionary intelligence, 2022, 15(1): 1-22.
[6] RONNEBERGER O, FISCHER P, BROX T. U-Net: convolutional networks for biomedical image segmentation[C]//International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer, 2015: 234-241.
[7] ZHOU Zongwei, RAHMAN SIDDIQUEE M M, TAJBAKHSH N, et al. UNet++: a nested U-net architecture for medical image segmentation[C]//Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Cham: Springer, 2018: 3-11.
[8] HUANG Huimin, LIN Lanfen, TONG Ruofeng, et al. UNet 3+: a full-scale connected UNet for medical image segmentation[C]//2020 IEEE International Conference on Acoustics, Speech and Signal Processing. Barcelona: IEEE, 2020: 1055-1059.
[9] ZHANG Zhengxin, LIU Qingjie, WANG Yunhong. Road extraction by deep residual U-net[J]. IEEE geoscience and remote sensing letters, 2018, 15(5): 749-753.
[10] JHA D, SMEDSRUD P H, RIEGLER M A, et al. ResUNet++: an advanced architecture for medical image segmentation[C]//2019 IEEE International Symposium on Multimedia. San Diego: IEEE, 2019: 225-2255.
[11] TOMAR N K, JHA D, BAGCI U. DilatedSegNet: a deep dilated segmentation network for Polyp segmentation[C]//MultiMedia Modeling. Cham: Springer, 2023: 334-344.
[12] TOMAR N K, SRIVASTAVA A, BAGCI U, et al. Automatic polyp segmentation with multiple kernel dilated convolution network[C]//2022 IEEE 35th International Symposium on Computer-Based Medical Systems. Shenzhen: IEEE, 2022: 317-322.
[13] ZHANG Wenchao, FU Chong, ZHENG Yu, et al. HSNet: a hybrid semantic network for polyp segmentation[J]. Computers in biology and medicine, 2022, 150: 106173.
[14] DONG Bo, WANG Wenhai, FAN Dengping, et al. Polyp-PVT: polyp segmentation with pyramid vision transformers[J]. CAAI artificial intelligence research, 2023: 9150015.
[15] LI Weisheng, ZHAO Yinghui, LI Feiyan, et al. MIA-Net: multi-information aggregation network combining transformers and convolutional feature learning for polyp segmentation[J]. Knowledge-based systems, 2022, 247: 108824.
[16] WU Huisi, CHEN Shihuai, CHEN Guilian, et al. FAT-Net: feature adaptive transformers for automated skin lesion segmentation[J]. Medical image analysis, 2022, 76: 102327.
[17] ZHANG Yundong, LIU Huiye, HU Qiang. TransFuse: fusing transformers and CNNs for medical image segmentation[C]//Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2021: 14-24.
[18] LIU Guoqi, WANG Jiajia, LIU Dong, et al. A multiscale nonlocal feature extraction network for breast lesion segmentation in ultrasound images[J]. IEEE transactions on instrumentation and measurement, 2023, 72: 1-12.
[19] ZHANG Ruifei, LI Guanbin, LI Zhen, et al. Adaptive context selection for polyp segmentation[C]//Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2020: 253-262.
[20] FAN Dengping, JI Gepeng, ZHOU Tao, et al. PraNet: parallel reverse attention network for polyp segmentation[C]//Medical Image Computing and Computer Assisted Intervention. Cham: Springer, 2020: 263-273.
[21] LOU Ange, GUAN Shuyue, KO H, et al. CaraNet: context axial reverse attention network for segmentation of small medical objects[C]//Medical Imaging 2022: Image Processing. San Diego: SPIE, 2022: 81-92.
[22] DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: transformers for image recognition at scale[EB/OL]. (2020-10-22)[2023-06-13]. http://arxiv.org/abs/2010.11929.
[23] ZHENG Sixiao, LU Jiachen, ZHAO Hengshuang, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 6877-6886.
[24] TOUVRON H, CORD M, DOUZE M, et al. Training data-efficient image transformers & distillation through attention[EB/OL]. (2020-12-23)[2023-06-13]. https://arxiv.org/abs/2012.12877.
[25] 汪鹏程, 张波涛, 顾进广. 融合多尺度门控卷积和窗口注意力的结肠息肉分割[J]. 计算机系统应用, 2024, 33(6): 70-80.
WANG Pengcheng, ZHANG Botao, GU Jinguang. Colon polyp segmentation fusing multi-scale gate convolution and window attention[J]. Computer systems and applications, 2024, 33(6): 70-80.
[26] QIN Xuebin, ZHANG Zichen, HUANG Chenyang, et al. BASNet: boundary-aware salient object detection[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 7471-7481.
[27] VÁZQUEZ D, BERNAL J, SÁNCHEZ F J, et al. A benchmark for endoluminal scene segmentation of colonoscopy images[J]. Journal of healthcare engineering, 2017: 4037190.
[28] BERNAL J, SÁNCHEZ F J, FERNÁNDEZ-ESPARRACH G, et al. WM-DOVA maps for accurate polyp highlighting in colonoscopy: validation vs. saliency maps from physicians[J]. Computerized medical imaging and graphics, 2015, 43: 99-111.
[29] POGORELOV K, RANDEL K R, GRIWODZ C, et al. Kvasir: a multi-class image dataset for computer aided gastrointestinal disease detection[C]//Proceedings of the 8th ACM on Multimedia Systems Conference. Taipei: ACM, 2017: 164-169.
[30] BERNAL J, SÁNCHEZ J, VILARIÑO F. Towards automatic polyp detection with a polyp appearance model[J]. Pattern recognition, 2012, 45(9): 3166-3182.
[31] SILVA J, HISTACE A, ROMAIN O, et al. Toward embedded detection of polyps in WCE images for early diagnosis of colorectal cancer[J]. International journal of computer assisted radiology and surgery, 2014, 9(2): 283-293.
[32] GAO Shanghua, CHENG Mingming, ZHAO Kai, et al. Res2Net: a new multi-scale backbone architecture[J]. IEEE transactions on pattern analysis and machine intelligence, 2021, 43(2): 652-662.
相似文献/Similar references:
[1]郭一楠,王斌,巩敦卫,等.实体结构与语义融合的多层注意力知识表示学习[J].智能系统学报,2023,18(3):577.[doi:10.11992/tis.202204026]
 GUO Yinan,WANG Bin,GONG Dunwei,et al.Multi-layer attention knowledge representation learning by integrating entity structure with semantics[J].CAAI Transactions on Intelligent Systems,2023,18(3):577.[doi:10.11992/tis.202204026]
[2]周静,胡怡宇,黄心汉.形状补全引导的Transformer点云目标检测方法[J].智能系统学报,2023,18(4):731.[doi:10.11992/tis.202210038]
 ZHOU Jing,HU Yiyu,HUANG Xinhan.Shape completion-guided Transformer point cloud object detection method[J].CAAI Transactions on Intelligent Systems,2023,18(4):731.[doi:10.11992/tis.202210038]
[3]张少乐,雷涛,王营博,等.基于多尺度金字塔Transformer的人群计数方法[J].智能系统学报,2024,19(1):67.[doi:10.11992/tis.202304044]
 ZHANG Shaole,LEI Tao,WANG Yingbo,et al.A crowd counting network based on multi-scale pyramid Transformer[J].CAAI Transactions on Intelligent Systems,2024,19(1):67.[doi:10.11992/tis.202304044]
[4]程艳,胡建生,赵松华,等.融合Transformer和交互注意力网络的方面级情感分类模型[J].智能系统学报,2024,19(3):728.[doi:10.11992/tis.202303016]
 CHENG Yan,HU Jiansheng,ZHAO Songhua,et al.Aspect-level sentiment classification model combining Transformer and interactive attention network[J].CAAI Transactions on Intelligent Systems,2024,19(3):728.[doi:10.11992/tis.202303016]
[5]邵凯,王明政,王光宇.基于Transformer的多尺度遥感语义分割网络[J].智能系统学报,2024,19(4):920.[doi:10.11992/tis.202304026]
 SHAO Kai,WANG Mingzheng,WANG Guangyu.Transformer-based multiscale remote sensing semantic segmentation network[J].CAAI Transactions on Intelligent Systems,2024,19(4):920.[doi:10.11992/tis.202304026]
[6]刘万军,姜岚,曲海成,等.融合CNN与Transformer的MRI脑肿瘤图像分割[J].智能系统学报,2024,19(4):1007.[doi:10.11992/tis.202301016]
 LIU Wanjun,JIANG Lan,QU Haicheng,et al.MRI brain tumor image segmentation by fusing CNN and Transformer[J].CAAI Transactions on Intelligent Systems,2024,19(4):1007.[doi:10.11992/tis.202301016]
[7]丁贵广,陈辉,王澳,等.视觉深度学习模型压缩加速综述[J].智能系统学报,2024,19(5):1072.[doi:10.11992/tis.202311011]
 DING Guiguang,CHEN Hui,WANG Ao,et al.Review of model compression and acceleration for visual deep learning[J].CAAI Transactions on Intelligent Systems,2024,19(5):1072.[doi:10.11992/tis.202311011]
[8]郝剑龙,刘志斌,张宸,等.基于改进Transformer和超图模型的股票趋势预测方法研究[J].智能系统学报,2024,19(5):1126.[doi:10.11992/tis.202308017]
 HAO Jianlong,LIU Zhibin,ZHANG Chen,et al.Stock trend prediction method based on improved Transformer and hypergraph model[J].CAAI Transactions on Intelligent Systems,2024,19(5):1126.[doi:10.11992/tis.202308017]
[9]黄昱程,肖子旺,武丹凤,等.时空融合与判别力增强的孪生网络目标跟踪方法[J].智能系统学报,2024,19(5):1218.[doi:10.11992/tis.202306005]
 HUANG Yucheng,XIAO Ziwang,WU Danfeng,et al.Spatiotemporal fusion and discriminative augmentation for improved Siamese tracking[J].CAAI Transactions on Intelligent Systems,2024,19(5):1218.[doi:10.11992/tis.202306005]

备注/Memo

Received: 2023-06-13.
Funding: National Natural Science Foundation of China (61901160, U1904123).
About the authors: LIU Guoqi, associate professor, Ph.D.; main research interests include image segmentation, machine learning, and partial differential equations. E-mail: gqliu@htu.edu.cn. CHEN Zongyu, master's student; main research interests include image segmentation and machine learning. E-mail: chenzongyu1010@163.com. LIU Dong, professor; main research interests include educational data mining and complex network analysis. E-mail: liudong@htu.edu.cn.
Corresponding author: LIU Guoqi. E-mail: gqliu@htu.edu.cn.

更新日期/Last Update: 2024-09-05