DUN Jiale,WANG Jun,PENG Hanchen,et al.Blended domain adaptation for computer-aided diagnosis of autism through knowledge distillation[J].CAAI Transactions on Intelligent Systems,2025,20(1):81-90.[doi:10.11992/tis.202403030]
《智能系统学报》 (CAAI Transactions on Intelligent Systems) [ISSN 1673-4785/CN 23-1538/TP]
Volume: 20
Issue: 2025, No. 1
Pages: 81-90
Section: Academic papers - Machine learning
Publication date: 2025-01-05
- Title: Blended domain adaptation for computer-aided diagnosis of autism through knowledge distillation
- Author(s): DUN Jiale (顿家乐), WANG Jun (王骏), PENG Hanchen (彭汉琛), LI Juncheng (李俊诚), SHI Jun (施俊)
- Affiliation: School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Keywords: autism spectrum disorder; domain adaptation; blended target domain; knowledge distillation; graph convolutional network; teacher network; student network; adversarial learning
- CLC number: TP391
- DOI: 10.11992/tis.202403030
- Abstract: When domain adaptation is used to build computer-aided diagnosis models for autism spectrum disorder (ASD), the target domain typically blends unlabeled samples from multiple imaging centers and therefore contains multiple distributions. Traditional domain adaptation methods assume a single distribution in the target domain and cannot directly handle such blended target domains. To this end, we propose a blended-target domain adaptation model based on knowledge distillation. Specifically, a graph convolutional network (GCN) serves as the teacher model and a multilayer perceptron (MLP) as the student model. To cope with the diverse distributions in the blended target domain, a novel adversarial knowledge distillation mechanism is proposed: feature extractors and domain discriminators are trained adversarially to reduce the distribution gap between the source and target domains, while knowledge distillation lets the teacher model transfer its knowledge to the student model as domain adaptation proceeds. The effectiveness of the method is validated on the ABIDE dataset. Our method not only reduces network complexity but also achieves a classification accuracy of 69.17% on the blended target domain, surpassing other domain adaptation methods.
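The teacher-to-student transfer described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the loss below follows the standard temperature-scaled KL distillation formulation (Hinton-style), the names (`kd_loss`, `softmax`) and the toy two-class logits are illustrative assumptions, and the adversarial feature-extractor/domain-discriminator training is omitted.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T produces softer probabilities."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Distillation loss: KL(teacher || student) at temperature T, scaled by
    T^2 so its gradient magnitude stays comparable to a hard-label loss."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return (T ** 2) * kl.mean()

# Toy example: two-class (ASD vs. control) logits for a batch of 3 subjects.
teacher = np.array([[2.0, -1.0], [0.5, 0.3], [-1.5, 1.0]])  # e.g. GCN outputs
student = np.array([[1.8, -0.8], [0.4, 0.5], [-1.0, 0.9]])  # e.g. MLP outputs
print(kd_loss(student, teacher, T=2.0))  # small positive value; 0 iff outputs match
```

In a full training loop this term would be combined with the supervised source-domain loss and the adversarial domain-alignment loss; minimizing it pulls the lightweight student's predictive distribution toward the teacher's.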
Memo
Received: 2024-03-18.
Funding: National Natural Science Foundation of China (62272289).
About the authors: DUN Jiale, master's student; research interests: deep learning, computer vision, transfer learning. E-mail: dunjiale1997@163.com. WANG Jun, associate professor, Ph.D.; senior member of the China Computer Federation (CCF) and of IEEE; member of the Granular Computing and Knowledge Discovery Committee and the Machine Learning Committee of the Chinese Association for Artificial Intelligence (CAAI); member of the MICS online committee; research interests: machine learning, intelligent computing for medical imaging; author of more than 70 academic papers. E-mail: wangjun_shu@shu.edu.cn. PENG Hanchen, master's student; research interests: deep learning, transfer learning, image processing. E-mail: phcking0219@163.com.
Corresponding author: WANG Jun. E-mail: wangjun_shu@shu.edu.cn
Last update: 2025-01-05