HUANG Heyan, LI Silin, LAN Tianwei, et al. A survey on the safety of large language model: classification, evaluation, attribution, mitigation and prospect[J]. CAAI Transactions on Intelligent Systems, 2025, 20(1): 2-32. [doi:10.11992/tis.202401006]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 20
Issue: 2025, No. 1
Pages: 2-32
Section: Review
Publication date: 2025-01-05
- Title:
-
A survey on the safety of large language model: classification, evaluation, attribution, mitigation and prospect
- Author(s):
-
HUANG Heyan1, LI Silin1, LAN Tianwei1, QIU Yuli1, LIU Zeming2, YAO Jiashu1, ZENG Li1, SHAN Yingyu1, SHI Xiaoming3, GUO Yuhang1
-
1. School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China;
2. School of Computer Science and Engineering, Beihang University, Beijing 100191, China;
3. Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, Harbin 150001, China
-
- Keywords:
-
large language model; model safety; generated content safety; safety classification; safety risk evaluation; safety risk attribution; safety risk mitigation measures; safety research prospect
- CLC number:
-
TP39
- DOI:
-
10.11992/tis.202401006
- Abstract:
-
Large language models can provide answers comparable to human performance across many fields and tasks, and they exhibit rich emergent capabilities even in fields and tasks on which they have never been trained. However, current artificial intelligence systems built on large language models carry many hidden safety risks. For example, large language models are vulnerable to attacks that are difficult to detect, and the content they generate may be illegal, leak confidential information, or contain hate speech, bias, or factual errors. Moreover, in practical applications large language models may be abused, and the content they generate can cause harm at multiple levels, including countries, social groups, and professional domains. This paper aims to explore in depth and classify the safety risks faced by large language models, review existing evaluation methods, study the causal mechanisms behind these risks, and summarize existing countermeasures. Specifically, this paper identifies 10 safety risks of large language models, categorizes them into two aspects, the safety risks of the model itself and the safety risks of the generated content, and analyzes each risk in detail. Furthermore, this paper systematically analyzes the safety risks of large language models from two perspectives, life cycle and degree of harm, and introduces existing methods for assessing the safety risks of large language models, the causes of these risks, and the corresponding mitigation measures. The safety risk of large language models is an important problem that urgently needs to be solved.
Memo
Received: 2024-01-03.
Foundation item: National Natural Science Foundation of China (U21B2009); Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2020AAA0106601).
About the authors: HUANG Heyan, professor, also serves as director of the Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications. Her main research interests are machine translation and natural language processing. She has led more than 20 national research projects, including projects under the National Key R&D Program, key projects of the National Natural Science Foundation of China, and topics under the National High Technology Research and Development Program, and has received more than 10 national and provincial/ministerial awards, including a First Prize of the National Science and Technology Progress Award. She has received the special government allowance of the State Council since 1997 and was named a National Outstanding Science and Technology Worker in 2014. E-mail: hhy63@bit.edu.cn. LI Silin, master's student; her main research interests are information extraction and language model safety. E-mail: lisilin87@outlook.com. GUO Yuhang, lecturer; his main research interests are natural language processing, information extraction, machine translation, machine learning, and artificial intelligence. E-mail: guoyuhang@bit.edu.cn.
Corresponding author: GUO Yuhang. E-mail: guoyuhang@bit.edu.cn
Last Update:
2025-01-05