LING Tong, YANG Wanqi, YANG Ming. Prostate segmentation in CT images with multimodal U-net[J]. CAAI Transactions on Intelligent Systems, 2018, 13(06): 981-988. [doi:10.11992/tis.201806012]

Prostate segmentation in CT images with multimodal U-net

CAAI Transactions on Intelligent Systems [ISSN: 1673-4785 / CN: 23-1538/TP]

Volume:
Vol. 13
Issue:
No. 6, 2018
Pages:
981-988
Section:
Publication date:
2018-10-25

Article Info

Title:
Prostate segmentation in CT images with multimodal U-net
Author(s):
LING Tong, YANG Wanqi, YANG Ming
School of Computer Science and Technology, Nanjing Normal University, Nanjing 210023, China
Keywords:
CT; MRI; deep learning; multimodal U-net; single-modal U-net; transfer learning; loss function; prostate segmentation
CLC number:
TP18; R318
DOI:
10.11992/tis.201806012
Abstract:
Computed tomography (CT) can be applied to the examination and diagnosis of prostate cancer; however, it provides low contrast for soft-tissue structures, which makes accurate prostate segmentation in CT images difficult. In contrast, magnetic resonance imaging (MRI) offers relatively high soft-tissue contrast and provides rich image information for segmenting lesions. To improve the accuracy of prostate segmentation in CT images, this paper proposes a novel deep-learning-based multimodal U-net (MM-unet) that fully exploits the complementary information between MRI and CT images. Specifically, transfer learning is first used to train an initial segmentation model for each of the MRI and CT modalities; a novel multimodal loss function, MM-Loss, is then designed to link the segmentation models of the different modalities, and MM-unet is trained jointly on paired MRI and CT images. To validate the effectiveness of the proposed MM-unet, we conducted experiments on a prostate dataset provided by a collaborating hospital. The experimental results show that MM-unet achieves a Dice score 3 percentage points higher than that of U-net for prostate segmentation in CT images.
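
The abstract describes MM-unet and MM-Loss only at a high level. As a rough illustration, the following minimal PyTorch sketch shows one plausible form of such joint training: a soft Dice loss for each modality plus a weighted consistency term that couples the MRI and CT predictions. The network stub SegNet2D, the helper dice_loss, the weight lam, and the assumption of spatially registered MRI/CT pairs are all illustrative choices, not taken from the paper; the actual MM-unet architecture and MM-Loss definition may differ.

import torch
import torch.nn as nn

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss; pred and target are (N, 1, H, W) tensors with values in [0, 1].
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

class SegNet2D(nn.Module):
    # Tiny stand-in for a U-net-style encoder-decoder; the real MM-unet is much deeper.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

# Two single-modality models, e.g. initialized separately beforehand (the transfer-learning step).
net_mri, net_ct = SegNet2D(), SegNet2D()
opt = torch.optim.SGD(list(net_mri.parameters()) + list(net_ct.parameters()), lr=0.01)
lam = 0.1  # weight of the cross-modality consistency term (assumed value)

# One joint training step on a toy batch of paired, registered MRI/CT slices with their masks.
mri, ct = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
mask_mri = torch.randint(0, 2, (2, 1, 64, 64)).float()
mask_ct = torch.randint(0, 2, (2, 1, 64, 64)).float()
pred_mri, pred_ct = net_mri(mri), net_ct(ct)
loss = (dice_loss(pred_mri, mask_mri) + dice_loss(pred_ct, mask_ct)
        + lam * torch.mean((pred_mri - pred_ct) ** 2))  # assumed MM-Loss-style coupling
opt.zero_grad()
loss.backward()
opt.step()

In this assumed form, the consistency term rewards the CT branch for producing segmentations close to those of the MRI branch on aligned slices, which is one way complementary MRI information could guide CT segmentation; the Dice score reported in the abstract would then be computed on the CT predictions alone.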

参考文献/References:

[1] 黄高贤. 前列腺癌中CT、MR诊断的研究进展[J]. 吉林医学, 2010, 31(4):529-531 HUANG Gaoxian. Progress in diagnosis of CT and MR in prostate cancer[J]. Jilin medical journal, 2010, 31(4):529-531
[2] 陈识, 宋曼. MRI与CT诊断不同病理分期前列腺癌患者的准确率比较观察[J]. 中国老年保健医学, 2016, 14(6):85-87 CHEN Shi, SONG Man. MRI and CT diagnosis of different pathological stages of prostate cancer patients’ accuracy rate comparison[J]. Chinese journal of geriatric care, 2016, 14(6):85-87
[3] LIAO Shu, SHEN Dinggang. A feature-based learning framework for accurate prostate localization in CT images[J]. IEEE transactions on image processing, 2012, 21(8):3546-3559.
[4] CHEN Siqi, LOVELOCK D M, RADKE R J. Segmenting the prostate and rectum in CT imagery using anatomical constraints[J]. Medical image analysis, 2011, 15(1):1-11.
[5] GAO Yaozong, LIAO Shu, SHEN Dinggang. Prostate segmentation by sparse representation based classification[J]. Medical physics, 2012, 39(10):6372-6387.
[6] SHI Jun, ZHENG Xiao, LI Yan, et al. Multimodal neuroimaging feature learning with multimodal stacked deep polynomial networks for diagnosis of Alzheimer’s disease[J]. IEEE journal of biomedical and health informatics, 2018, 22(1):173-183.
[7] CAI Yunliang, LANDIS M, LAIDLEY D T, et al. Multi-modal vertebrae recognition using transformed deep convolution network[J]. Computerized medical imaging and graphics, 2016, 51:11-19.
[8] HINTON G E, OSINDERO S, THE Y W. A fast learning algorithm for deep belief nets[J]. Neural computation, 2006, 18(7):1527-1554.
[9] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, 2012:1097-1105.
[10] KOREZ R, LIKAR B, PERNU? F, et al. Model-based segmentation of vertebral bodies from MR images with 3D CNNs[C]//Proceedings of the 19th International Conference on Medical Image Computing and Computer-Assisted Intervention. Athens, Greece, 2016:433-441.
[11] XIE Yuanpu, ZHANG Zizhao, SAPKOTA M, et al. Spatial clockwork recurrent neural network for muscle perimysium segmentation[C]//Proceedings of the 19th International Conference on Medical Image Computing and Computer-assisted Intervention. Athens, Greece, 2016:185-193.
[12] BROSCH T, TANG L Y W, YOO Y, et al. Deep 3D convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation[J]. IEEE transactions on medical imaging, 2016, 35(5):1229-1239.
[13] YOSINSKI J, CLUNE J, BENGIO Y, et al. How transferable are features in deep neural networks?[J]. arXiv:1411.1792, 2014.
[14] SHI Yinghuan, YANG Wanqi, GAO Yang, et al. Does manual delineation only provide the side information in CT prostate segmentation?[C]//Proceedings of the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention. Quebec City, Canada, 2017:692-700.
[15] KOUSHIK J. Understanding convolutional neural networks[J]. arXiv:1605.09081, 2016.
[16] XU Bing, WANG Naiyan, CHEN Tianqi, et al. Empirical evaluation of rectified activations in convolutional network[J]. arXiv:1505:00853, 2015.
[17] ZEILER M D, TAYLOR G W, FERGUS R. Adaptive deconvolutional networks for mid and high level feature learning[C]//Proceedings of the 2011 International Conference on Computer Vision. Barcelona, Spain, 2011:2018-2025.
[18] RONNEBERGER O, FISCHER P, BROX T. U-Net:convolutional networks for biomedical image segmentation[C]//Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention. Munich, Germany, 2015:234-241.
[19] IOFFE S, SZEGEDY C. Batch normalization:accelerating deep network training by reducing internal covariate shift[J]. arXiv:1502.03167, 2015.
[20] LIN Min, CHEN Qiang, YAN Shuicheng. Network in network[J]. arXiv:1312.4400, 2013.
[21] FRANCE N. MICCAI grand challenge:prostate MR image segmentation 2012[EB/OL].[2012-10-01]. https://promise12.grand-challenge.org/home/.
[22] JIA Yangqing, SHELHAMER E, DONAHUE J, et al. Caffe:convolutional architecture for fast feature embedding[C]//Proceedings of the 22nd ACM International Conference on Multimedia. Florida, USA, 2014:675-678.
[23] KETKAR N. Stochastic gradient descent[M]//KETKAR N. Deep Learning with Python. Berkeley:Apress, 2017.
[24] TAHA A A, HANBURY A. Metrics for evaluating 3D medical image segmentation:analysis, selection, and tool[J]. BMC medical imaging, 2015, 15:29.

Memo

Received: 2018-06-06.
Foundation item: National Natural Science Foundation of China (61603193, 61432008); Natural Science Foundation of Jiangsu Province (BK20171479); Jiangsu Postdoctoral Science Foundation (1701157B).
About the authors: LING Tong, female, born in 1994, is a master's student; her main research interests are deep learning and medical image segmentation. YANG Wanqi, female, born in 1988, is a lecturer; her main research interests are machine learning, pattern recognition, and computer vision. In recent years she has published 16 papers in international journals and at conferences such as IEEE TNNLS, IEEE TCyb, CVIU, IJCAI, ACM MM, and MICCAI, all indexed by SCI and EI. YANG Ming, male, born in 1964, is a professor and doctoral supervisor; his main research interests are database and data mining technology, pattern recognition, and machine learning. He holds 3 authorized national invention patents and has published more than 100 academic papers, over 60 of which are indexed by SCI, EI, and ISTP.
Corresponding author: YANG Wanqi. E-mail: yangwq@njnu.edu.cn
Last Update: 2018-12-25