[1]DING Weichang,SHI Jun,WANG Jun.Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning[J].CAAI Transactions on Intelligent Systems,2023,18(1):66-74.[doi:10.11992/tis.202111052]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 18
Issue: 2023, No. 1
Pages: 66-74
Column: Academic Papers - Intelligent Systems
Publication date: 2023-01-05
- Title: Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning
- Author(s): DING Weichang; SHI Jun; WANG Jun
- Affiliation: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Keywords: self-supervised learning; contrastive learning; ultrasound image; elastography ultrasound; B-mode ultrasound; multi-modality; breast cancer; computer-aided diagnosis; deep learning
- CLC: TP391
- DOI: 10.11992/tis.202111052
- Abstract: Automatic ultrasound-based diagnosis of breast cancer has important clinical value. However, high-accuracy automatic diagnosis methods are difficult to construct because labeled data are scarce. Recently, self-supervised contrastive learning has shown great potential for learning discriminative and well-generalized features from unlabeled natural images. However, the way positive and negative samples are constructed for natural images does not transfer directly to breast ultrasound. To this end, this work introduces elastography ultrasound (EUS) images and proposes a self-supervised contrastive learning method that integrates multi-modality information. Specifically, positive and negative pairs are constructed from multi-modality ultrasound images of the same patient and of different patients, respectively, and the contrastive learning objective is built on modal consistency, rotation invariance, and sample separation. EUS information is integrated into the model by learning a unified feature representation for both modalities in the embedding space, which improves performance on the downstream B-mode ultrasound classification task. The experimental results show that our method can fully mine high-level semantic features from unlabeled multi-modality breast ultrasound images, thereby effectively improving the diagnostic accuracy for breast cancer.
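The modal-consistency criterion described in the abstract can be illustrated with a standard cross-modal InfoNCE loss: B-mode and EUS embeddings from the same patient are pulled together, while embeddings from different patients in the batch act as negatives and are pushed apart. The sketch below is a hypothetical PyTorch illustration of that one term, not the paper's exact objective; the function name, temperature value, and symmetric form are assumptions, and the rotation-invariance and sample-separation terms of the full criterion are omitted.

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(z_bus, z_eus, temperature=0.1):
    """Sketch of a cross-modal InfoNCE (modal-consistency) loss.

    z_bus: (N, D) embeddings of B-mode ultrasound images.
    z_eus: (N, D) embeddings of EUS images; row i is the same patient
           as row i of z_bus (positive pair), all other rows in the
           batch (different patients) serve as negatives.
    """
    z_bus = F.normalize(z_bus, dim=1)           # unit-norm embeddings
    z_eus = F.normalize(z_eus, dim=1)
    logits = z_bus @ z_eus.t() / temperature    # (N, N) cosine similarities
    targets = torch.arange(z_bus.size(0), device=z_bus.device)
    # Symmetric loss: each modality must retrieve its same-patient
    # counterpart among all patients in the batch.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    torch.manual_seed(0)
    z_b = torch.randn(8, 128)   # stand-ins for encoder outputs
    z_e = torch.randn(8, 128)   # (8 patients, 128-D embeddings)
    print(cross_modal_info_nce(z_b, z_e).item())
```

Because the loss is computed over same-patient pairs rather than augmented views of one image, minimizing it drives the two modalities toward a unified embedding space, which is what allows EUS information to benefit the downstream B-mode-only classifier.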