XIANG Xu, YU Hong, ZHANG Xiaoxia, et al. IsomapVSG-LIME: a novel local interpretable model-agnostic explanation method[J]. CAAI Transactions on Intelligent Systems, 2023, 18(4): 841-848. [doi:10.11992/tis.202209010]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 18
Issue: 2023, No. 4
Pages: 841-848
Section: Artificial Intelligence Deans Forum
Publication date: 2023-07-15
Title: IsomapVSG-LIME: a novel local interpretable model-agnostic explanation method
Author(s): XIANG Xu, YU Hong, ZHANG Xiaoxia, WANG Guoyin
Affiliation: Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Keywords: local interpretable model-agnostic explanations (LIME); machine learning; isometric mapping virtual sample generation (IsomapVSG); hierarchical agglomerative clustering; stability; local fidelity; random perturbation sampling; features sequence stability index (FSSI)
CLC number: TP181
DOI: 10.11992/tis.202209010
Abstract: To solve the problem that the random perturbation sampling method of local interpretable model-agnostic explanations (LIME) produces explanations lacking local fidelity and stability, this paper proposes a new model-agnostic explanation method, IsomapVSG-LIME. The method replaces LIME's random perturbation sampling with isometric mapping virtual sample generation (IsomapVSG), a virtual sample generation method based on manifold learning, and uses agglomerative hierarchical clustering to select representative samples from the virtual samples for training the explanation model. The paper also proposes a new evaluation index for explanation stability, the features sequence stability index (FSSI), which addresses the failure of earlier indexes to account for the ordering of features and the flipping of explanations. Experimental results show that the proposed method outperforms the latest models in both stability and local fidelity.
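The abstract describes a three-stage pipeline: generate virtual samples near the instance being explained, compress them to representatives with agglomerative clustering, and fit a local surrogate model; stability is then scored by comparing the feature orderings of repeated explanations. The sketch below is a minimal, hypothetical Python illustration of that pipeline, not the authors' implementation: the neighbour-interpolation sample generator (standing in for IsomapVSG), the Gaussian locality kernel, the ridge surrogate, and the position-agreement form of `fssi` are all assumptions of this sketch, since the record gives no formulas.

```python
# Hypothetical sketch of an IsomapVSG-LIME-style pipeline -- NOT the paper's
# exact algorithm. Sample generation, kernel, and FSSI form are assumptions.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import Ridge
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

def generate_virtual_samples(X, n_virtual=200, k=5):
    """Interpolate each point with one of its k nearest neighbours --
    a simple stand-in for manifold-based virtual sample generation."""
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    virtual = []
    for _ in range(n_virtual):
        i = rng.integers(len(X))
        j = rng.choice(idx[i][1:])      # a neighbour of point i (skip self)
        t = rng.random()                # interpolation weight in [0, 1)
        virtual.append((1 - t) * X[i] + t * X[j])
    return np.array(virtual)

def select_representatives(V, n_clusters=20):
    """Cluster the virtual samples agglomeratively and keep one
    representative per cluster (the member closest to the centroid)."""
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(V)
    reps = []
    for c in range(n_clusters):
        members = V[labels == c]
        centroid = members.mean(axis=0)
        reps.append(members[np.argmin(np.linalg.norm(members - centroid, axis=1))])
    return np.array(reps)

def explain(black_box, X, x0, n_features=3):
    """Fit a locally weighted linear surrogate around x0 and return the
    indices of the top features by absolute coefficient."""
    reps = select_representatives(generate_virtual_samples(X))
    w = np.exp(-np.linalg.norm(reps - x0, axis=1) ** 2)   # locality kernel
    surrogate = Ridge(alpha=1.0).fit(reps, black_box(reps), sample_weight=w)
    return np.argsort(-np.abs(surrogate.coef_))[:n_features]

def fssi(rankings):
    """Order-aware stability sketch: mean position-for-position agreement
    between all pairs of repeated feature rankings (1.0 = fully stable)."""
    rankings = np.array(rankings)
    n = len(rankings)
    agree = [np.mean(rankings[i] == rankings[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(agree))
```

Unlike a plain top-k overlap score, a position-wise agreement like the `fssi` sketch above penalizes two explanations that pick the same features in a different order, which is the order-sensitivity the abstract attributes to the real FSSI.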
Memo
Received: 2022-09-06.
Funding: National Natural Science Foundation of China (62136002, 61876027); Chongqing Talents Program (cstc2022ycjh-bgzxm0004).
About the authors: XIANG Xu, master's student; research interests: interpretable machine learning. YU Hong, professor and doctoral supervisor; research interests: three-way decision, rough sets, granular computing, cognitive computing, cluster analysis, and trustworthy artificial intelligence; has led more than 10 National Natural Science Foundation of China projects, published more than 100 academic papers, and authored 5 monographs. WANG Guoyin, professor, doctoral supervisor, national-level talent, and vice president of Chongqing University of Posts and Telecommunications; research interests: rough sets, granular computing, data mining, cognitive computing, big data, and artificial intelligence; holds 20 authorized invention patents, has published more than 300 academic papers, and authored 23 monographs.
Corresponding author: YU Hong. E-mail: yuhong@cqupt.edu.cn