DING Weichang, SHI Jun, WANG Jun. Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning[J]. CAAI Transactions on Intelligent Systems, 2023, 18(1): 66-74. [doi:10.11992/tis.202111052]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 18
Issue: 2023, No. 1
Pages: 66-74
Section: Academic Papers - Intelligent Systems
Publication date: 2023-01-05
- Title:
Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning
- Author(s):
DING Weichang, SHI Jun, WANG Jun
School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Keywords:
self-supervised learning; contrastive learning; ultrasound image; elastography ultrasound; B-mode ultrasound; multi-modality; breast cancer; computer-aided diagnosis; deep learning
- CLC number:
TP391
- DOI:
10.11992/tis.202111052
- Abstract:
Automatic ultrasound-based diagnosis of breast cancer has important clinical value. However, building a high-accuracy automatic diagnosis method is very challenging due to the lack of large amounts of labeled data. In recent years, self-supervised contrastive learning has shown great potential for producing discriminative and highly generalizable features from unlabeled natural images. However, the positive- and negative-sample construction strategies designed for natural images are not applicable in the breast ultrasound domain. To this end, this work introduces elastography ultrasound (EUS) images and, exploiting the multimodal nature of ultrasound imaging, proposes a self-supervised contrastive learning method that integrates multimodal information. Specifically, positive samples are constructed from the multi-modality ultrasound images of the same patient, and negative samples from those of different patients; the contrastive learning objective is built on modal consistency, rotation invariance, and sample separation. By learning a unified feature representation for both modalities in the embedding space, the EUS information is integrated into the model, improving its performance in the downstream B-mode ultrasound classification task. Experimental results show that the proposed method can fully mine high-level semantic features from unlabeled multimodal breast ultrasound images, effectively improving the diagnostic accuracy of breast cancer.
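The sampling scheme described in the abstract, with same-patient (B-mode, EUS) pairs as positives and cross-patient pairs as negatives, can be illustrated with an InfoNCE-style loss. The sketch below is a minimal illustration under assumed names (`multimodal_contrastive_loss`, precomputed embedding matrices), not the paper's actual objective; in particular it covers only the modal-consistency and sample-separation terms and omits the rotation-invariance term.

```python
import numpy as np

def multimodal_contrastive_loss(z_bmode, z_eus, temperature=0.1):
    """InfoNCE-style loss for paired multimodal embeddings.

    z_bmode, z_eus: (n_patients, dim) arrays; row i of each holds the
    embedding of patient i's B-mode and EUS image, respectively.
    The positive pair for patient i is (z_bmode[i], z_eus[i]); the EUS
    embeddings of all other patients act as negatives.
    """
    # L2-normalize so that dot products are cosine similarities.
    zb = z_bmode / np.linalg.norm(z_bmode, axis=1, keepdims=True)
    ze = z_eus / np.linalg.norm(z_eus, axis=1, keepdims=True)
    # sim[i, j] = similarity between bmode_i and eus_j, sharpened by temperature.
    sim = zb @ ze.T / temperature
    # Log-softmax over each row: the diagonal entry is the positive,
    # off-diagonal entries (other patients) are the negatives.
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

With well-aligned same-patient embeddings the loss approaches zero, while mismatched pairings (e.g. embeddings shuffled across patients) yield a much larger loss, which is the behavior the contrastive objective exploits.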
Memo
Received: 2021-11-29.
Foundation item: Natural Science Foundation of Shanghai (20ZR1419900).
About the authors: DING Weichang, M.S. candidate, whose main research interests are deep learning and medical image processing; SHI Jun, professor, whose main research interests are medical image analysis and processing, and pattern recognition; WANG Jun, associate professor, whose main research interests are machine learning and intelligent analysis of medical images.
Corresponding author: WANG Jun. E-mail: wangjun_shu@shu.edu.cn.