[1] LING Tong, YANG Wanqi, YANG Ming. Prostate segmentation in CT images with multimodal U-net[J]. CAAI Transactions on Intelligent Systems, 2018, 13(6): 981-988. [doi:10.11992/tis.201806012]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume: 13
Issue: 2018(6)
Pages: 981-988
Column: Academic Papers: Machine Perception and Pattern Recognition
Publication date: 2018-10-25
- Title: Prostate segmentation in CT images with multimodal U-net
- Author(s): LING Tong; YANG Wanqi; YANG Ming
- Affiliation: School of Computer Science and Technology, Nanjing Normal University, Nanjing 210023, China
- Keywords: CT; MRI; deep learning; multimodal U-net; single-modal U-net; transfer learning; loss function; prostate segmentation
- CLC: TP18; R318
- DOI: 10.11992/tis.201806012
- Abstract: Computed tomography (CT) can be applied to prostate cancer diagnosis; however, it visualizes soft-tissue structures poorly because of low contrast, making accurate prostate segmentation in CT images difficult. In contrast, magnetic resonance imaging (MRI) provides relatively high soft-tissue contrast and thus rich image information for prostate segmentation. To improve the accuracy of prostate segmentation in CT images, a novel multimodal U-net (MM-unet) based on deep learning is proposed, which fully utilizes the complementary information between MRI and CT images. Transfer learning is first applied to initialize the parameters of the MRI and CT segmentation models, and a novel multimodal loss function, MM-Loss, is then proposed to connect the segmentation models of the two modalities, so that the proposed MM-unet is trained jointly on paired MRI and CT images. To validate the effectiveness of the proposed MM-unet, experiments were carried out on a prostate dataset provided by our affiliated hospital. The experimental results show that MM-unet achieves a 3% higher Dice score than U-net for prostate segmentation in CT images.
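The abstract describes a joint loss that couples the two segmentation networks on paired MRI/CT data. The exact form of MM-Loss is not given in this record, so the following is a minimal NumPy sketch under an assumed formulation: a soft Dice loss per modality plus an L2 consistency term that encourages the CT and MRI predictions on a paired slice to agree. The names `mm_loss`, `dice_loss`, and the weight `lam` are illustrative, not the authors' notation.

```python
# Hypothetical sketch of a multimodal segmentation loss in the spirit of
# MM-Loss; the consistency term and its weighting are assumptions, not the
# paper's actual formulation.
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P.T| / (|P| + |T|), with eps for stability."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def mm_loss(pred_ct, pred_mri, target_ct, target_mri, lam=0.5):
    """Per-modality Dice losses plus a cross-modality consistency term."""
    seg_term = dice_loss(pred_ct, target_ct) + dice_loss(pred_mri, target_mri)
    # Paired slices image the same anatomy, so the two networks' soft
    # predictions are pushed toward each other (assumed coupling).
    consistency = np.mean((pred_ct - pred_mri) ** 2)
    return seg_term + lam * consistency

# Toy paired example: one ground-truth mask, two noisy soft predictions.
rng = np.random.default_rng(0)
target = (rng.random((64, 64)) > 0.5).astype(float)
pred_ct = np.clip(target + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
pred_mri = np.clip(target + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(mm_loss(pred_ct, pred_mri, target, target))
```

In a real training loop the two U-nets (initialized via the transfer-learning step the abstract mentions) would be optimized jointly against this combined objective on paired MRI/CT batches.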