XIAO Jianli, XU Dongzhou, WANG Hao, et al. Survey of large language models in healthcare[J]. CAAI Transactions on Intelligent Systems, 2025, 20(3): 530-547. [doi: 10.11992/tis.202405003]

Survey of large language models in healthcare

References:
[1] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.
[2] WOLF T, DEBUT L, SANH V, et al. Transformers: state-of-the-art natural language processing[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. Stroudsburg: Association for Computational Linguistics, 2020: 38-45.
[3] ZHANG Tianfu, HUANG Heyan, FENG Chong, et al. Enlivening redundant heads in multi-head self-attention for machine translation[C]//Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2021: 3238-3248.
[4] HAN Xu, ZHANG Zhengyan, DING Ning, et al. Pre-trained models: past, present and future[J]. AI open, 2021, 2: 225-250.
[5] LEE J, YOON W, KIM S, et al. BioBERT: a pre-trained biomedical language representation model for biomedical text mining[J]. Bioinformatics, 2020, 36(4): 1234-1240.
[6] BODENREIDER O. The unified medical language system (UMLS): integrating biomedical terminology[J]. Nucleic acids research, 2004, 32(Suppl): D267-D270.
[7] JOHNSON A E W, POLLARD T J, SHEN Lu, et al. MIMIC-III, a freely accessible critical care database[J]. Scientific data, 2016, 3: 160035.
[8] YIN Yanshen, ZHANG Yong, LIU Xiao, et al. HealthQA: a Chinese QA summary system for smart health[C]//International Conference on Smart Health. Cham: Springer, 2014: 51-62.
[9] LI Jianquan, WANG Xidong, WU Xiangbo, et al. Huatuo-26M, a large-scale Chinese medical QA dataset[EB/OL]. (2023-05-02)[2023-12-12]. http://arxiv.org/abs/2305.01526.
[10] HE Junqing, FU Mingming, TU Manshu. Applying deep matching networks to Chinese medical question answering: a study and a dataset[J]. BMC medical informatics and decision making, 2019, 19(Suppl 2): 52.
[11] ZHANG Sheng, ZHANG Xin, WANG Hui, et al. Multi-scale attentive interaction networks for Chinese medical question answer selection[J]. IEEE access, 2018, 6: 74061-74071.
[12] LI Yunxiang, LI Zihan, ZHANG Kai, et al. ChatDoctor: a medical chat model fine-tuned on a large language model meta-AI (LLaMA) using medical domain knowledge[J]. Cureus, 2023, 15(6): e40895.
[13] ZENG Guangtao, YANG Wenmian, JU Zeqian, et al. MedDialog: large-scale medical dialogue datasets[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Stroudsburg: Association for Computational Linguistics, 2020: 9241-9250.
[14] HOU Yutai, XIA Yingce, WU Lijun, et al. Discovering drug-target interaction knowledge from biomedical literature[J]. Bioinformatics, 2022, 38(22): 5100-5107.
[15] LI Jiao, SUN Yueping, JOHNSON R J, et al. BioCreative V CDR task corpus: a resource for chemical disease relation extraction[J]. Database, 2016, 2016: baw068.
[16] ZHANG Sheng, XU Yanbo, USUYAMA N, et al. BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs[EB/OL]. (2023-03-02)[2024-01-01]. http://arxiv.org/abs/2303.00915.
[17] HERRERO-ZAZO M, SEGURA-BEDMAR I, MARTÍNEZ P, et al. The DDI corpus: an annotated corpus with pharmacological substances and drug-drug interactions[J]. Journal of biomedical informatics, 2013, 46(5): 914-920.
[18] NASTESKI V. An overview of the supervised machine learning methods[J]. Horizons B, 2017, 4: 51-62.
[19] MURALI N, KUCUKKAYA A, PETUKHOVA A, et al. Supervised machine learning in oncology: a clinician’s guide[J]. Digestive disease interventions, 2020, 4(1): 73-81.
[20] DING Ning, QIN Yujia, YANG Guang, et al. Delta tuning: a comprehensive study of parameter efficient methods for pre-trained language models[EB/OL]. (2022-03-15)[2024-01-01]. http://arxiv.org/abs/2203.06904.
[21] HU E J, SHEN Yelong, WALLIS P, et al. LoRA: low-rank adaptation of large language models[EB/OL]. (2021-10-16)[2024-01-01]. http://arxiv.org/abs/2106.09685.
[22] ZHANG Yu, YANG Qiang. A survey on multi-task learning[J]. IEEE transactions on knowledge and data engineering, 2022, 34(12): 5586-5609.
[23] JING Baoyu, XIE Pengtao, XING E. On the automatic generation of medical imaging reports[C]//Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: Association for Computational Linguistics, 2018: 2577-2586.
[24] 刘侠, 吕志伟, 王波, 等. 联合超声甲状腺结节分割与分类的多任务方法研究[J]. 智能系统学报, 2023, 18(4): 764-774.
LIU Xia, LYU Zhiwei, WANG Bo, et al. Multi-task method for segmentation and classification of thyroid nodules combined with ultrasound images[J]. CAAI transactions on intelligent systems, 2023, 18(4): 764-774.
[25] CHRISTIANO P, LEIKE J, BROWN T B, et al. Deep reinforcement learning from human preferences[EB/OL]. (2017-07-13)[2024-01-01]. http://arxiv.org/abs/1706.03741.
[26] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[EB/OL]. (2020-07-22)[2024-01-01]. http://arxiv.org/abs/2005.14165.
[27] GUO Zijun, AO Sha, AO Bo. Few-shot learning based oral cancer diagnosis using a dual feature extractor prototypical network[J]. Journal of biomedical informatics, 2024, 150: 104584.
[28] PALATUCCI M, POMERLEAU D, HINTON G, et al. Zero-shot learning with semantic output codes[C]//Proceedings of the 23rd International Conference on Neural Information Processing Systems. New York: ACM, 2009: 1410-1418.
[29] 翟永杰, 张智柏, 王亚茹. 基于改进TransGAN的零样本图像识别方法[J]. 智能系统学报, 2023, 18(2): 352-359.
ZHAI Yongjie, ZHANG Zhibai, WANG Yaru. An image recognition method of zero-shot learning based on an improved TransGAN[J]. CAAI transactions on intelligent systems, 2023, 18(2): 352-359.
[30] KANDPAL N, DENG Haikang, ROBERTS A, et al. Large language models struggle to learn long-tail knowledge[EB/OL]. (2022-11-15)[2024-01-01]. http://arxiv.org/abs/2211.08411.
[31] KOJIMA T, GU S S, REID M, et al. Large language models are zero-shot reasoners[EB/OL]. (2022-10-02)[2024-01-01]. http://arxiv.org/abs/2205.11916.
[32] 马武仁, 弓孟春, 戴辉, 等. 以ChatGPT为代表的大语言模型在临床医学中的应用综述[J]. 医学信息学杂志, 2023, 44(7): 9-17.
MA Wuren, GONG Mengchun, DAI Hui, et al. A comprehensive review of the applications of large language models in clinical medicine with ChatGPT as a representative[J]. Journal of medical informatics, 2023, 44(7): 9-17.
[33] 王和私, 马柯昕. 人工智能翻译应用的对比研究: 以生物医学文本为例[J]. 中国科技翻译, 2023, 36(3): 23-26.
WANG Hesi, MA Kexin. The application of artificial intelligence in biomedical text translation: a comparative study[J]. Chinese science & technology translators journal, 2023, 36(3): 23-26.
[34] 郝洁, 彭庆龙, 丛山, 等. 基于提示学习的医学量表问题文本多分类研究[J]. 中国循证医学杂志, 2024, 24(1): 76-82.
HAO Jie, PENG Qinglong, CONG Shan, et al. Multi-class classification of medical scale question texts based on prompt learning[J]. Chinese journal of evidence-based medicine, 2024, 24(1): 76-82.
[35] 姜会珍, 胡海洋, 马琏, 等. 基于医患对话的病历自动生成技术研究[J]. 中国数字医学, 2021, 16(10): 36-40.
JIANG Huizhen, HU Haiyang, MA Lian, et al. Research on automatic generation of electronic medical record based on doctor-patient dialogue[J]. China digital medicine, 2021, 16(10): 36-40.
[36] 杨波, 孙晓虎, 党佳怡, 等. 面向医疗问答系统的大语言模型命名实体识别方法[J]. 计算机科学与探索, 2023, 17(10): 2389-2402.
YANG Bo, SUN Xiaohu, DANG Jiayi, et al. Named entity recognition method of large language model for medical question answering system[J]. Journal of frontiers of computer science and technology, 2023, 17(10): 2389-2402.
[37] AYERS J W, POLIAK A, DREDZE M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum[J]. JAMA internal medicine, 2023, 183(6): 589-596.
[38] KHANBHAI M, WARREN L, SYMONS J, et al. Using natural language processing to understand, facilitate and maintain continuity in patient experience across transitions of care[J]. International journal of medical informatics, 2022, 157: 104642.
[39] NAKHLEH A, SPITZER S, SHEHADEH N. ChatGPT’s response to the diabetes knowledge questionnaire: implications for diabetes education[J]. Diabetes technology & therapeutics, 2023, 25(8): 571-573.
[40] 陈一鸣, 刘健, 从承志, 等. 强直性脊柱炎患者与ChatGPT的对话实验: 患者教育的新方式[J]. 风湿病与关节炎, 2023, 12(7): 37-43.
CHEN Yiming, LIU Jian, CONG Chengzhi, et al. Dialogue experiment between patients with ankylosing spondylitis and ChatGPT: a new way of patient education[J]. Rheumatism and arthritis, 2023, 12(7): 37-43.
[41] JUNG H, KIM Y, CHOI H, et al. Enhancing clinical efficiency through LLM: discharge note generation for cardiac patients[EB/OL]. (2024-04-08)[2024-05-01]. http://arxiv.org/abs/2404.05144.
[42] 余泽浩, 张雷明, 张梦娜, 等. 基于人工智能的药物研发: 目前的进展和未来的挑战[J]. 中国药科大学学报, 2023, 54(3): 282-293.
YU Zehao, ZHANG Leiming, ZHANG Mengna, et al. Artificial intelligence-based drug development: current progress and future challenges[J]. Journal of China pharmaceutical university, 2023, 54(3): 282-293.
[43] 刘月嫦, 陈紫茹, 杨敏, 等. 国内外大语言模型在临床检验题库中的表现[J]. 临床检验杂志, 2023, 41(12): 941-944.
LIU Yuechang, CHEN Ziru, YANG Min, et al. Performance of domestic and international large language models in question banks of clinical laboratory medicine[J]. Chinese journal of clinical laboratory science, 2023, 41(12): 941-944.
[44] YANG Zhichao, YAO Zonghai, TASMIN M, et al. Performance of multimodal GPT-4V on USMLE with image: potential for imaging diagnostic support with explanations[EB/OL]. (2023-10-26)[2024-01-01]. https://www.medrxiv.org/content/10.1101/2023.10.26.23297629v3.
[45] OH N, CHOI G S, LEE W Y. ChatGPT goes to the operating room: evaluating GPT-4 performance and its potential in surgical education and training in the era of large language models[J]. Annals of surgical treatment and research, 2023, 104(5): 269-273.
[46] DANON L M, BÄHR V, SCHIFF E, et al. Learning to establish a therapeutic doctor-patient communication: German and Israeli medical students experiencing integrative medicine’s skills[J]. Social science, humanities and sustainability research, 2021, 2(4): 48.
[47] NORI H, KING N, MCKINNEY S M, et al. Capabilities of GPT-4 on medical challenge problems[EB/OL]. (2023-04-12)[2024-01-01]. http://arxiv.org/abs/2303.13375.
[48] UEDA D, WALSTON S L, MATSUMOTO T, et al. Evaluating GPT-4-based ChatGPT’s clinical potential on the NEJM quiz[J]. BMC digital health, 2024, 2(1): 4.
[49] FINK M A, BISCHOFF A, FINK C A, et al. Potential of ChatGPT and GPT-4 for data mining of free-text CT reports on lung cancer[J]. Radiology, 2023, 308(3): e231362.
[50] DEVLIN J, CHANG Mingwei, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. (2018-10-11)[2024-01-01]. http://arxiv.org/abs/1810.04805.
[51] YOON W, LEE J, KIM D, et al. Pre-trained language model for biomedical question answering[M]//Communications in Computer and Information Science. Cham: Springer International Publishing, 2020: 727-740.
[52] SINGHAL K, AZIZI S, TU Tao, et al. Large language models encode clinical knowledge[J]. Nature, 2023, 620: 172-180.
[53] ANIL R, DAI A M, FIRAT O, et al. PaLM 2 technical report[EB/OL]. (2023-09-13)[2024-01-01]. http://arxiv.org/abs/2305.10403.
[54] SINGHAL K, TU Tao, GOTTWEIS J, et al. Towards expert-level medical question answering with large language models[EB/OL]. (2023-05-16)[2024-01-01]. http://arxiv.org/abs/2305.09617.
[55] LUO Renqian, SUN Liai, XIA Yingce, et al. BioGPT: generative pre-trained transformer for biomedical text generation and mining[J]. Briefings in bioinformatics, 2022, 23(6): bbac409.
[56] ZHANG Kai, ZHOU Rong, ADHIKARLA E, et al. BiomedGPT: a generalist vision-language foundation model for diverse biomedical tasks[EB/OL]. (2023-05-26)[2024-01-01]. http://arxiv.org/abs/2305.17100.
[57] LI Chunyuan, WONG C, ZHANG Sheng, et al. LLaVA-med: training a large language-and-vision assistant for biomedicine in one day[EB/OL]. (2023-06-01)[2024-01-01]. http://arxiv.org/abs/2306.00890.
[58] HAN Tianyu, ADAMS L C, PAPAIOANNOU J M, et al. MedAlpaca: an open-source collection of medical conversational AI models and training data[EB/OL]. (2023-10-04)[2024-01-01]. http://arxiv.org/abs/2304.08247.
[59] LI Wenqiang, YU Lina, WU Min, et al. DoctorGPT: a large language model with Chinese medical question-answering capabilities[C]//2023 International Conference on High Performance Big Data and Intelligent Systems. Macau: IEEE, 2023: 186-193.
[60] XIONG Honglin, WANG Sheng, ZHU Yitao, et al. DoctorGLM: fine-tuning your Chinese doctor is not a Herculean task[EB/OL]. (2023-04-17)[2024-01-01]. http://arxiv.org/abs/2304.01097.
[61] WANG Haochun, LIU Chi, XI Nuwa, et al. HuaTuo: tuning LLaMA model with Chinese medical knowledge[EB/OL]. (2023-04-14)[2024-01-01]. http://arxiv.org/abs/2304.06975.
[62] 奥德玛, 杨云飞, 穗志方, 等. 中文医学知识图谱CMeKG构建初探[J]. 中文信息学报, 2019, 33(10): 1-7.
ODMAA, YANG Yunfei, SUI Zhifang, et al. Preliminary study on the construction of the Chinese medical knowledge graph CMeKG[J]. Journal of Chinese information processing, 2019, 33(10): 1-7.
[63] ZHANG Hongbo, CHEN Junying, JIANG Feng, et al. HuatuoGPT, towards taming language model to be a doctor[C]//Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg: Association for Computational Linguistics, 2023: 10859-10885.
[64] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of machine learning research, 2020, 21(140): 1-67.
[65] ZHAO Haiyan, CHEN Hanjie, YANG Fan, et al. Explainability for large language models: a survey[J]. ACM transactions on intelligent systems and technology, 2024, 15(2): 1-38.
[66] AMANN J, BLASIMME A, VAYENA E, et al. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective[J]. BMC medical informatics and decision making, 2020, 20: 1-9.
[67] RUDIN C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead[J]. Nature machine intelligence, 2019, 1(5): 206-215.
[68] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. (2015-03-09)[2024-01-01]. http://arxiv.org/abs/1503.02531.
[69] DOSHI-VELEZ F, KIM B. Towards a rigorous science of interpretable machine learning[EB/OL]. (2017-03-02)[2024-01-01]. http://arxiv.org/abs/1702.08608.
[70] WANG Danding, YANG Qian, ABDUL A, et al. Designing theory-driven user-centric explainable AI[C]//Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. Glasgow: ACM, 2019: 1-15.
[71] LUNDBERG S, LEE S I. A unified approach to interpreting model predictions[EB/OL]. (2017-11-25)[2024-01-01]. http://arxiv.org/abs/1705.07874.
[72] ALAMMAR J. Ecco: an open source library for the explainability of transformer language models[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations. Stroudsburg: Association for Computational Linguistics, 2021: 249-257.
[73] PAN J Z, RAZNIEWSKI S, KALO J C, et al. Large language models and knowledge graphs: opportunities and challenges[EB/OL]. (2023-08-11)[2024-01-01]. http://arxiv.org/abs/2308.06374.
[74] YE Hongbin, LIU Tong, ZHANG Aijia, et al. Cognitive mirage: a review of hallucinations in large language models[EB/OL]. (2023-09-13)[2024-01-01]. http://arxiv.org/abs/2309.06794.
[75] 陈小平. 大模型关联度预测的形式化和语义解释研究[J]. 智能系统学报, 2023, 18(4): 894-900.
CHEN Xiaoping. Research on formalization and semantic interpretations of correlation degree prediction in large language models[J]. CAAI transactions on intelligent systems, 2023, 18(4): 894-900.
[76] ZHANG Muru, PRESS O, MERRILL W, et al. How language model hallucinations can snowball[EB/OL]. (2023-05-22)[2024-01-01]. http://arxiv.org/abs/2305.13534.
[77] ALKAISSI H, MCFARLANE S I. Artificial hallucinations in ChatGPT: implications in scientific writing[J]. Cureus, 2023, 15(2): e35179.
[78] TANG Liyan, SUN Zhaoyi, IDNAY B, et al. Evaluating large language models on medical evidence summarization[J]. NPJ digital medicine, 2023, 6: 158.
[79] GOODMAN K E, YI P H, MORGAN D J. AI-generated clinical summaries require more than accuracy[J]. JAMA, 2024, 331(8): 637-638.
[80] YU Wenhao, ZHANG Zhihan, LIANG Zhenwen, et al. Improving language models via plug-and-play retrieval feedback[EB/OL]. (2023-05-23)[2024-01-01]. http://arxiv.org/abs/2305.14002.
[81] MARTINO A, IANNELLI M, TRUONG C. Knowledge injection to counter large language model (LLM) hallucination[M]//Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2023: 182-185.
[82] PAL A, SANKARASUBBU M. Gemini goes to med school: exploring the capabilities of multimodal large language models on medical challenge problems & hallucinations[EB/OL]. (2024-02-10)[2024-05-01]. http://arxiv.org/abs/2402.07023.
[83] STAAB R, VERO M, BALUNOVIĆ M, et al. Beyond memorization: violating privacy via inference with large language models[EB/OL]. (2023-11-11)[2024-01-01]. http://arxiv.org/abs/2310.07298.
[84] MESKÓ B, TOPOL E J. The imperative for regulatory oversight of large language models (or generative AI) in healthcare[J]. NPJ digital medicine, 2023, 6: 120.
[85] ZHANG Chen, XIE Yu, BAI Hang, et al. A survey on federated learning[J]. Knowledge-based systems, 2021, 216: 106775.
[86] YU Da, NAIK S, BACKURS A, et al. Differentially private fine-tuning of language models[EB/OL]. (2021-10-13)[2024-01-01]. http://arxiv.org/abs/2110.06500.
[87] SOIN A, BHATU P, TAKHAR R, et al. Multi-institution encrypted medical imaging AI validation without data sharing[EB/OL]. (2021-08-13)[2024-01-01]. http://arxiv.org/abs/2107.10230.
[88] ZACK T, LEHMAN E, SUZGUN M, et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study[J]. The lancet digital health, 2024, 6(1): e12-e22.
[89] WEIDINGER L, MELLOR J, RAUH M, et al. Ethical and social risks of harm from language models[EB/OL]. (2021-12-08)[2024-01-01]. http://arxiv.org/abs/2112.04359.
[90] 古天龙, 马露, 李龙, 等. 符合伦理的人工智能应用的价值敏感设计: 现状与展望[J]. 智能系统学报, 2022, 17(1): 2-15.
GU Tianlong, MA Lu, LI Long, et al. Value sensitive design of ethical-aligned AI applications: current situation and prospect[J]. CAAI transactions on intelligent systems, 2022, 17(1): 2-15.
[91] 刘学博, 户保田, 陈科海, 等. 大模型关键技术与未来发展方向——从ChatGPT谈起[J]. 中国科学基金, 2023, 37(5): 758-766.
LIU Xuebo, HU Baotian, CHEN Kehai, et al. Key technologies and future development directions of large models: starting from ChatGPT[J]. Science foundation in China, 2023, 37(5): 758-766.
[92] ZHOU Ying, LI Zheng, LI Yingxin. Interdisciplinary collaboration between nursing and engineering in health care: a scoping review[J]. International journal of nursing studies, 2021, 117: 103900.
[93] World Health Organization. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models[M]. Geneva: World Health Organization, 2024.
[94] ZHAO Zihao, LIU Yuxiao, WU Han, et al. CLIP in medical imaging: a comprehensive survey[EB/OL]. (2023-12-26)[2024-05-01]. http://arxiv.org/abs/2312.07353.
[95] 丁维昌, 施俊, 王骏. 自监督对比特征学习的多模态乳腺超声诊断[J]. 智能系统学报, 2023, 18(1): 66-74.
DING Weichang, SHI Jun, WANG Jun. Multi-modality ultrasound diagnosis of the breast with self-supervised contrastive feature learning[J]. CAAI transactions on intelligent systems, 2023, 18(1): 66-74.
[96] TOPOL E J. As artificial intelligence goes multimodal, medical applications multiply[J]. Science, 2023, 381(6663): adk6139.
[97] 高晗, 田育龙, 许封元, 等. 深度学习模型压缩与加速综述[J]. 软件学报, 2020, 32(1): 68-92.
GAO Han, TIAN Yulong, XU Fengyuan, et al. Survey of deep learning model compression and acceleration[J]. Journal of software, 2020, 32(1): 68-92.
[98] GOU Jianping, YU Baosheng, MAYBANK S J, et al. Knowledge distillation: a survey[J]. International journal of computer vision, 2021, 129(6): 1789-1819.
[99] ULLRICH K, MEEDS E, WELLING M. Soft weight-sharing for neural network compression[EB/OL]. (2017-05-19)[2024-01-01]. http://arxiv.org/abs/1702.04008.
[100] LIU Zhuang, SUN Mingjie, ZHOU Tinghui, et al. Rethinking the value of network pruning[EB/OL]. (2018-11-11)[2024-01-01]. http://arxiv.org/abs/1810.05270.
[101] KAMBHAMPATI S, VALMEEKAM K, GUAN Lin, et al. LLMs can’t plan, but can help planning in LLM-modulo frameworks[EB/OL]. (2024-02-12)[2024-07-01]. http://arxiv.org/abs/2402.01817.
[102] XI Zhiheng, CHEN Wenxiang, GUO Xin, et al. The rise and potential of large language model based agents: a survey[EB/OL]. (2023-09-19)[2024-01-01]. http://arxiv.org/abs/2309.07864.
[103] MOOR M, BANERJEE O, ABAD Z S H, et al. Foundation models for generalist medical artificial intelligence[J]. Nature, 2023, 616: 259-265.
[104] 陈小平. 人工智能中的封闭性和强封闭性: 现有成果的能力边界、应用条件和伦理风险[J]. 智能系统学报, 2020, 15(1): 114-120.
CHEN Xiaoping. Criteria of closeness and strong closeness in artificial intelligence—limits, application conditions and ethical risks of existing technologies[J]. CAAI transactions on intelligent systems, 2020, 15(1): 114-120.