[1] XIANG Xu, YU Hong, ZHANG Xiaoxia, et al. IsomapVSG-LIME: a novel local interpretable model-agnostic explanations[J]. CAAI Transactions on Intelligent Systems, 2023, 18(4): 841–848. [doi:10.11992/tis.202209010]

IsomapVSG-LIME: a novel local interpretable model-agnostic explanations

References:
[1] JI Shouling, LI Jinfeng, DU Tianyu, et al. A review of interpretability methods, applications and security of machine learning models[J]. Journal of computer research and development, 2019, 56(10): 2071–2096. (in Chinese)
[2] CHEN Kerui, MENG Xiaofeng. Interpretability of machine learning[J]. Journal of computer research and development, 2020, 57(9): 1971–1986. (in Chinese)
[3] MOLNAR C. Interpretable machine learning[M]. Raleigh: Lulu Press, 2019.
[4] CHENG Guojian, LIU Lianhong. An overview of the interpretability of machine learning[J]. Intelligent computer and applications, 2020, 10(5): 6–9. (in Chinese)
[5] RIBEIRO M T, SINGH S, GUESTRIN C. “Why should I trust you?”: explaining the predictions of any classifier[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: ACM, 2016: 1135–1144.
[6] MODHUKUR V, SHARMA S, MONDAL M, et al. Machine learning approaches to classify primary and metastatic cancers using tissue of origin-based DNA methylation profiles[J]. Cancers, 2021, 13(15): 3768.
[7] PAN Pan, LI Yichao, XIAO Yongjiu, et al. Prognostic assessment of COVID-19 in the intensive care unit by machine learning methods: model development and validation[J]. Journal of medical internet research, 2020, 22(11): e23128.
[8] SCHULTEBRAUCKS K, CHOI K W, GALATZER-LEVY I R, et al. Discriminating heterogeneous trajectories of resilience and depression after major life stressors using polygenic scores[J]. JAMA psychiatry, 2021, 78(7): 744–752.
[9] FAN Yanghua, LI Dongfang, LIU Yifan, et al. Toward better prediction of recurrence for Cushing’s disease: a factorization-machine based neural approach[J]. International journal of machine learning and cybernetics, 2021, 12(3): 625–633.
[10] NÓBREGA C, MARINHO L. Towards explaining recommendations through local surrogate models[C]//Proceedings of the 34th ACM/SIGAPP Symposium on Applied Computing. New York: ACM, 2019: 1671–1678.
[11] ZHU Fan, JIANG Min, QIU Yiming, et al. RSLIME: an efficient feature importance analysis approach for industrial recommendation systems[C]//2019 International Joint Conference on Neural Networks. Piscataway: IEEE, 2019: 1–6.
[12] ONCHIS D M. Stable and explainable deep learning damage prediction for prismatic cantilever steel beam[J]. Computers in industry, 2021, 125: 103359.
[13] PANDEY P, RAI A, MITRA M. Explainable 1-D convolutional neural network for damage detection using Lamb wave[J]. Mechanical systems and signal processing, 2022, 164: 108220.
[14] GARREAU D, VON LUXBURG U. Explaining the explainer: a first theoretical analysis of LIME[C]//International Conference on Artificial Intelligence and Statistics. Palermo: PMLR, 2020: 1287–1296.
[15] SLACK D, HILGARD S, JIA E, et al. Fooling LIME and SHAP: adversarial attacks on post hoc explanation methods[C]//Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. New York: ACM, 2020: 180–186.
[16] RAHNAMA A H A, BOSTRÖM H. A study of data and label shift in the LIME framework[EB/OL]. (2019-10-31)[2022-09-06]. https://arxiv.org/abs/1910.14421.
[17] ZAFAR M R, KHAN N. Deterministic local interpretable model-agnostic explanations for stable explainability[J]. Machine learning and knowledge extraction, 2021, 3(3): 525–541.
[18] ZHAO Xingyu, HUANG Wei, HUANG Xiaowei, et al. BayLIME: Bayesian local interpretable model-agnostic explanations[C]//Uncertainty in Artificial Intelligence. Toronto: PMLR, 2021: 887–896.
[19] SHANKARANARAYANA S M, RUNJE D. ALIME: autoencoder based approach for local interpretability[M]. Cham: Springer International Publishing, 2019: 454–463.
[20] ZHOU Zhengze, HOOKER G, WANG Fei. S-LIME: stabilized-LIME for model explanation[C]//Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. New York: ACM, 2021: 2429–2438.
[21] VISANI G, BAGLI E, CHESANI F, et al. Statistical stability indices for LIME: obtaining reliable explanations for machine learning models[J]. Journal of the operational research society, 2022, 73(1): 91–101.
[22] ZHANG Xiaohan, XU Yuan, HE Yanlin, et al. Novel manifold learning based virtual sample generation for optimizing soft sensor with small data[J]. ISA transactions, 2021, 109: 229–241.
[23] TENENBAUM J B, DE SILVA V, LANGFORD J C. A global geometric framework for nonlinear dimensionality reduction[J]. Science, 2000, 290(5500): 2319–2323.
[24] HUANG Guangbin, ZHU Qinyu, SIEW C K. Extreme learning machine: a new learning scheme of feedforward neural networks[C]//2004 IEEE International Joint Conference on Neural Networks. Piscataway: IEEE, 2004: 985–990.
[25] RASOULI P, YU I C. EXPLAN: explaining black-box classifiers using adaptive neighborhood generation[C]//2020 International Joint Conference on Neural Networks. Piscataway: IEEE, 2020: 1–9.
[26] BREIMAN L. Random forests[J]. Machine learning, 2001, 45(1): 5–32.