[1]肖建力,黄星宇,姜飞.智慧教育中的大语言模型综述[J].智能系统学报,2025,20(5):1054-1070.[doi:10.11992/tis.202406040]
 XIAO Jianli,HUANG Xingyu,JIANG Fei.A survey of large language models in smart education[J].CAAI Transactions on Intelligent Systems,2025,20(5):1054-1070.[doi:10.11992/tis.202406040]

智慧教育中的大语言模型综述

参考文献/References:
[1] 张岩. “互联网+教育” 理念及模式探析[J]. 中国高教研究, 2016(2): 70-73.
ZHANG Yan. On the concept and mode of “Internet plus education”[J]. China higher education research, 2016(2): 70-73.
[2] 马世龙, 乌尼日其其格, 李小平. 大数据与深度学习综述[J]. 智能系统学报, 2016, 11(6): 728-742.
MA Shilong, WUNIRI Q Q G, LI Xiaoping. Deep learning with big data: state of the art and development[J]. CAAI transactions on intelligent systems, 2016, 11(6): 728-742.
[3] ZHAO W X, ZHOU Kun, LI Junyi, et al. A survey of large language models[EB/OL]. (2023-11-24)[2024-06-20]. https://arxiv.org/abs/2303.18223.
[4] 余胜泉, 熊莎莎. 基于大模型增强的通用人工智能教师架构[J]. 开放教育研究, 2024, 30(1): 33-43.
YU Shengquan, XIONG Shasha. General artificial intelligence teacher architecture based on enhanced pre-trained large models[J]. Open education research, 2024, 30(1): 33-43.
[5] 曹培杰. 智慧教育: 人工智能时代的教育变革[J]. 教育研究, 2018, 39(8): 121-128.
CAO Peijie. Smart education: the educational reform at the age of artificial intelligence[J]. Educational research, 2018, 39(8): 121-128.
[6] KASNECI E, SESSLER K, KÜCHEMANN S, et al. ChatGPT for good? On opportunities and challenges of large language models for education[J]. Learning and individual differences, 2023, 103: 102274.
[7] 曹培杰, 谢阳斌, 武卉紫, 等. 教育大模型的发展现状、创新架构及应用展望[J]. 现代教育技术, 2024, 34(2): 5-12.
CAO Peijie, XIE Yangbin, WU Huizi, et al. The development status, innovation architecture and application prospects of educational big models[J]. Modern educational technology, 2024, 34(2): 5-12.
[8] GAN Wensheng, QI Zhenlian, WU Jiayang, et al. Large language models in education: vision and opportunities[C]//2023 IEEE International Conference on Big Data. Sorrento: IEEE, 2023: 4776-4785.
[9] 祝智庭, 卢琳萌, 王馨怡, 等. 智慧教育理论与实践在中国的发展: 十年回顾与近未来展望[J]. 中国远程教育, 2023(12): 21-33.
ZHU Zhiting, LU Linmeng, WANG Xinyi, et al. The development of smart education theory and practice in China: a ten-year review and near-future prospects[J]. Chinese journal of distance education, 2023(12): 21-33.
[10] RUDOLPH J, TAN S, TAN S. ChatGPT: bullshit spewer or the end of traditional assessments in higher education?[J]. Journal of applied learning and teaching, 2023, 6(1): 342-363.
[11] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30: 5998-6008.
[12] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
[13] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text Transformer[J]. Journal of machine learning research, 2020, 21(1): 5485-5551.
[14] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver: ACM, 2020: 1877-1901.
[15] REN Xiaozhe, ZHOU Pingyi, MENG Xinfan, et al. PanGu-Σ: towards trillion parameter language model with sparse heterogeneous computing[EB/OL]. (2023-05-20)[2024-06-20]. https://arxiv.org/abs/2303.10845.
[16] OpenAI. GPT-4[EB/OL]. (2024-03-08)[2024-06-20]. https://openai.com/gpt-4.
[17] BOMMASANI R, HUDSON D A, ADELI E, et al. On the opportunities and risks of foundation models[EB/OL]. (2022-07-12)[2024-06-20]. https://arxiv.org/abs/2108.07258.
[18] WANG Shen, XU Tianlong, LI Hang, et al. Large language models for education: a survey and outlook[EB/OL]. (2024-04-01)[2024-06-20]. https://arxiv.org/abs/2403.18105.
[19] GAO Tianyu, FISCH A, CHEN Danqi. Making pre-trained language models better few-shot learners[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. [S.l.]: ACL, 2021: 3816-3830.
[20] SCHICK T, SCHÜTZE H. Exploiting cloze questions for few shot text classification and natural language inference[EB/OL]. (2021-01-25)[2024-06-20]. https://arxiv.org/abs/2001.07676.
[21] WEI J, WANG Xuezhi, SCHUURMANS D, et al. Chain-of-thought prompting elicits reasoning in large language models[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans: ACM, 2022: 24824-24837.
[22] 唐雯谦, 覃成海, 向艳, 等. 智慧教育与个性化学习理论与实践研究[J]. 中国电化教育, 2021(5): 124-137.
TANG Wenqian, QIN Chenghai, XIANG Yan, et al. Research on theory and practice of intelligent education and personalized learning[J]. China educational technology, 2021(5): 124-137.
[23] ZUBIRI-ESNAOLA H, VIDU A, RIOS-GONZALEZ O, et al. Inclusivity, participation and collaboration: learning in interactive groups[J]. Educational research, 2020, 62(2): 162-180.
[24] 安涛, 赵可云. 大数据时代的教育技术发展取向[J]. 现代教育技术, 2016, 26(2): 27-32.
AN Tao, ZHAO Keyun. The developmental orientation of educational technology in the big data era[J]. Modern educational technology, 2016, 26(2): 27-32.
[25] RAWAS S. ChatGPT: Empowering lifelong learning in the digital age of higher education[J]. Education and information technologies, 2024, 29(6): 6895-6908.
[26] 刘凤娟, 赵蔚, 姜强, 等. 基于知识图谱的个性化学习模型与支持机制研究[J]. 中国电化教育, 2022(5): 75-81,90.
LIU Fengjuan, ZHAO Wei, JIANG Qiang, et al. Research on personalized learning model and support mechanism based on knowledge graph[J]. China educational technology, 2022(5): 75-81, 90.
[27] LEE U, JUNG H, JEON Y, et al. Few-shot is enough: exploring ChatGPT prompt engineering method for automatic question generation in English education[J]. Education and information technologies, 2024, 29(9): 11483-11515.
[28] HU Wenbo, XU Yifan, LI Yi, et al. BLIVA: a simple multimodal LLM for better handling of text-rich visual questions[J]. Proceedings of the AAAI conference on artificial intelligence, 2024, 38(3): 2256-2264.
[29] ZHAN Peida, HE Keren. A longitudinal diagnostic model with hierarchical learning trajectories[J]. Educational measurement: issues and practice, 2021, 40(3): 18-30.
[30] CHOWDHERY A, NARANG S, DEVLIN J, et al. PaLM: scaling language modeling with pathways[J]. Journal of machine learning research, 2023, 24(240): 1-113.
[31] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: open and efficient foundation language models[EB/OL]. (2023-02-27)[2024-06-20]. https://arxiv.org/abs/2302.13971.
[32] OpenAI. Introducing ChatGPT[EB/OL]. (2024-03-11) [2024-06-20]. https://openai.com/blog/chatgpt.
[33] 吴娟, 周建蓉, 卢仪珂, 等. 基于复杂学习设计的在线写作模型构建与应用[J]. 电化教育研究, 2024, 45(1): 108-113,121.
WU Juan, ZHOU Jianrong, LU Yike, et al. Construction and application of an online writing model based on complex learning design[J]. E-education research, 2024, 45(1): 108-113,121.
[34] 王洪鑫, 闫志明, 陈效玉, 等. 面向MOOC课程评论的主题挖掘与情感分析研究[J]. 开放学习研究, 2021, 26(4): 16-23.
WANG Hongxin, YAN Zhiming, CHEN Xiaoyu, et al. Research on topic mining and emotion analysis for MOOCs course review[J]. Journal of open learning, 2021, 26(4): 16-23.
[35] DAN Yuhao, LEI Zhikai, GU Yiyang, et al. EduChat: a large-scale language model-based chatbot system for intelligent education[EB/OL]. (2023-08-05)[2024-06-20]. https://arxiv.org/abs/2308.02773.
[36] BAI Jinze, BAI Shuai, CHU Yunfei, et al. Qwen technical report[EB/OL]. (2023-09-28)[2024-06-20]. https://arxiv.org/abs/2309.16609.
[37] XU Liang, LI Anqi, ZHU Lei, et al. SuperCLUE: a comprehensive Chinese large language model benchmark[EB/OL]. (2023-07-27)[2024-06-20]. https://arxiv.org/abs/2307.15020.
[38] 刘莉, 刘铁芳. 重审苏格拉底的 “产婆术”[J]. 全球教育展望, 2021, 50(9): 46-62.
LIU Li, LIU Tiefang. A reexamination of Socrates’ “midwifery”[J]. Global education, 2021, 50(9): 46-62.
[39] 陈静远, 吴韬, 吴飞. 课程、教材、平台三位一体的“人工智能引论”育人基座能力建设[J]. 计算机教育, 2023(11): 34-37.
CHEN Jingyuan, WU Tao, WU Fei. Construction of the educational foundation for "Introduction to Artificial Intelligence" with integration of courses, textbooks, and platforms[J]. Computer education, 2023(11): 34-37.
[40] HUANG Yuzhen, BAI Yuzhuo, ZHU Zhihao, et al. C-Eval: a multi-level multi-discipline Chinese evaluation suite for foundation models[J]. Advances in neural information processing systems, 2023, 36: 62991-63010.
[41] GILSON A, SAFRANEK C W, HUANG T, et al. How does ChatGPT perform on the United States medical licensing examination (USMLE)? the implications of large language models for medical education and knowledge assessment[J]. JMIR medical education, 2023, 9: e45312.
[42] KUNG T H, CHEATHAM M, MEDENILLA A, et al. Performance of ChatGPT on USMLE: potential for AI-assisted medical education using large language models[J]. PLoS digital health, 2023, 2(2): e0000198.
[43] RIZZO M G, CAI N, CONSTANTINESCU D. The performance of ChatGPT on orthopaedic in-service training exams: a comparative study of the GPT-3.5 turbo and GPT-4 models in orthopaedic education[J]. Journal of orthopaedics, 2024, 50: 70-75.
[44] NEUMANN M, RAUSCHENBERGER M, SCHÖN E M. “We need to talk about ChatGPT”: the future of AI and higher education[C]//2023 IEEE/ACM 5th International Workshop on Software Engineering Education for the Next Generation (SEENG). Melbourne: IEEE, 2023: 29-32.
[45] DEMPERE J, MODUGU K, HESHAM A, et al. The impact of ChatGPT on higher education[J]. Frontiers in education, 2023, 8: 1206936.
[46] POLVERINI G, GREGORCIC B. How understanding large language models can inform the use of ChatGPT in physics education[J]. European journal of physics, 2024, 45(2): 025701.
[47] KIESER F, WULFF P, KUHN J, et al. Educational data augmentation in physics education research using ChatGPT[J]. Physical review physics education research, 2023, 19(2): 020150.
[48] TSAI M L, ONG C W, CHEN Chengliang. Exploring the use of large language models (LLMs) in chemical engineering education: building core course problem models with Chat-GPT[J]. Education for chemical engineers, 2023, 44: 71-95.
[49] DAI Wei, LIN Jionghao, JIN Hua, et al. Can large language models provide feedback to students? A case study on ChatGPT[C]//2023 IEEE International Conference on Advanced Learning Technologies. Orem: IEEE, 2023: 323-325.
[50] WANG R E, DEMSZKY D. Is ChatGPT a good teacher coach? Measuring zero-shot performance for scoring and providing actionable insights on classroom instruction[EB/OL]. (2023-06-05)[2024-06-20]. https://arxiv.org/abs/2306.03090.
[51] GAO L, BIDERMAN S, BLACK S, et al. The Pile: an 800GB dataset of diverse text for language modeling[EB/OL]. (2020-12-31)[2024-06-20]. https://arxiv.org/abs/2101.00027.
[52] PATEL J M. Introduction to common crawl datasets[M]//Getting Structured Data from the Internet. Berkeley: Apress, 2020: 277-324.
[53] BANDY J, VINCENT N. Addressing “documentation debt” in machine learning research: a retrospective datasheet for BookCorpus[EB/OL]. (2021-05-11)[2024-06-20]. https://arxiv.org/abs/2105.05241.
[54] RAJPURKAR P, JIA R, LIANG P. Know what you don’t know: unanswerable questions for SQuAD[EB/OL]. (2018-06-11)[2024-06-20]. https://arxiv.org/abs/1806.03822.
[55] BOWMAN S R, ANGELI G, POTTS C, et al. A large annotated corpus for learning natural language inference[EB/OL]. (2015-08-21)[2024-06-20]. https://arxiv.org/abs/1508.05326.
[56] HAO Bin, ZHANG Min, MA Weizhi, et al. A large-scale rich context query and recommendation dataset in online knowledge-sharing[EB/OL]. (2021-06-11)[2024-06-20]. https://arxiv.org/abs/2106.06467.
[57] LI Wenhao, QI Fanchao, SUN Maosong, et al. CCPM: a Chinese classical poetry matching dataset[EB/OL]. (2021-06-03)[2024-06-20]. https://arxiv.org/abs/2106.01979.
[58] ZHENG Chujie, HUANG Minlie, SUN Aixin. ChID: a large-scale Chinese IDiom dataset for cloze test[C]//Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Florence: ACL, 2019: 778–787.
[59] SHAO C C, LIU T, LAI Yuting, et al. DRCD: a Chinese machine reading comprehension dataset[EB/OL]. (2019-05-29)[2024-06-20]. https://arxiv.org/abs/1806.00920.
[60] HENDRYCKS D, BASART S, KADAVATH S, et al. Measuring coding challenge competence with APPS[EB/OL]. (2021-11-08)[2024-06-20]. https://arxiv.org/abs/2105.09938.
[61] SANH V, WEBSON A, RAFFEL C, et al. Multitask prompted training enables zero-shot task generalization[EB/OL]. (2021-10-15)[2024-06-20]. https://arxiv.org/abs/2110.08207.
[62] BLACK S, BIDERMAN S, HALLAHAN E, et al. GPT-NeoX-20B: an open-source autoregressive language model[EB/OL]. (2022-04-14)[2024-06-20]. https://arxiv.org/abs/2204.06745.
[63] ZHANG Susan, ROLLER S, GOYAL N, et al. OPT: open pre-trained transformer language models[EB/OL]. (2022-06-21) [2024-06-20]. https://arxiv.org/abs/2205.01068.
[64] LE SCAO T, FAN A, AKIKI C, et al. BLOOM: a 176B-parameter open-access multilingual language model[EB/OL]. (2022-11-09) [2024-06-20]. https://arxiv.org/abs/2211.05100v4.
[65] TOUVRON H, MARTIN L, STONE K, et al. LLaMA 2: open foundation and fine-tuned chat models[EB/OL]. (2023-07-19) [2024-06-20]. https://arxiv.org/abs/2307.09288.
[66] TEAM G, MESNARD T, HARDIN C, et al. Gemma: open models based on Gemini research and technology[EB/OL]. (2024-04-16) [2024-06-20]. https://arxiv.org/abs/2403.08295.
[67] DUBEY A, JAUHRI A, PANDEY A, et al. The LLaMA 3 herd of models[EB/OL]. (2024-08-15) [2024-11-11]. https://arxiv.org/abs/2407.21783.
[68] TEAM G, RIVIERE M, PATHAK S, et al. Gemma 2: improving open language models at a practical size[EB/OL]. (2024-10-02) [2024-11-11]. https://arxiv.org/abs/2408.00118.
[69] ZENG Wei, REN Xiaozhe, SU Teng, et al. PanGu-α: large-scale autoregressive pretrained Chinese language models with auto-parallel computation[EB/OL]. (2021-04-26)[2024-06-20]. https://arxiv.org/abs/2104.12369.
[70] ZENG Aohan, XU Bin, WANG Bowen, et al. ChatGLM: a family of large language models from GLM-130B to GLM-4 all tools[EB/OL]. (2024-07-30)[2024-11-11]. https://arxiv.org/abs/2406.12793.
[71] YANG Aiyuan, XIAO Bin, WANG Bingning, et al. Baichuan 2: open large-scale language models[EB/OL]. (2023-09-20)[2024-06-20]. https://arxiv.org/abs/2309.10305.
[72] YOUNG A, CHEN Bei, LI Chao, et al. Yi: open foundation models by 01.AI[EB/OL]. (2024-03-07)[2024-06-20]. https://arxiv.org/abs/2403.04652.
[73] WU Shaohua, ZHAO Xudong, WANG Shenling, et al. YUAN 2.0: a large language model with localized filtering-based attention[EB/OL]. (2023-12-18)[2024-06-20]. https://arxiv.org/abs/2311.15786.
[74] Qwen Team. Introducing Qwen1.5[EB/OL]. (2024-02-04)[2024-06-20]. https://qwenlm.github.io/blog/qwen1.5/.
[75] YANG An, YANG Baosong, HUI Binyuan, et al. Qwen2 technical report[EB/OL]. (2024-09-10)[2024-11-11]. https://arxiv.org/abs/2407.10671.
[76] HOULSBY N, GIURGIU A, JASTRZEBSKI S, et al. Parameter-efficient transfer learning for NLP[C]//International Conference on Machine Learning. Long Beach: PMLR, 2019: 2790-2799.
[77] LI X L, LIANG P. Prefix-tuning: optimizing continuous prompts for generation[EB/OL]. (2021-01-01)[2024-06-20]. https://arxiv.org/abs/2101.00190.
[78] LIU Xiao, ZHENG Yanan, DU Zhengxiao, et al. GPT understands, too[J]. AI open, 2024, 5: 208-215.
[79] LESTER B, AL-RFOU R, CONSTANT N, et al. The power of scale for parameter-efficient prompt tuning[EB/OL]. (2021-09-02) [2024-06-20]. https://arxiv.org/abs/2104.08691.
[80] BEN ZAKEN E, RAVFOGEL S, GOLDBERG Y. BitFit: simple parameter-efficient fine-tuning for Transformer-based masked language-models[EB/OL]. (2022-09-05) [2024-06-20]. https://arxiv.org/abs/2106.10199.
[81] HU J E, SHEN Yelong, WALLIS P, et al. LoRA: low-rank adaptation of large language models[EB/OL]. (2021-10-16) [2024-06-20]. https://arxiv.org/abs/2106.09685.
[82] LIU Xiao, JI Kaixuan, FU Yicheng, et al. P-Tuning v2: prompt tuning can be comparable to fine-tuning universally across scales and tasks[EB/OL]. (2021-10-18) [2024-06-20]. https://arxiv.org/abs/2110.07602.
[83] ZHANG Qingru, CHEN Minshuo, BUKHARIN A, et al. AdaLoRA: adaptive budget allocation for parameter-efficient fine-tuning[EB/OL]. (2024-07-09) [2024-11-11]. https://arxiv.org/abs/2303.10512.
[84] DETTMERS T, PAGNONI A, HOLTZMAN A, et al. QLoRA: efficient finetuning of quantized LLMs[C]//Proceedings of the 37th International Conference on Neural Information Processing Systems. New Orleans: Curran Associates Inc., 2023: 10088-10115.
[85] LIU S Y, WANG C Y, YIN Hongxu, et al. DoRA: weight-decomposed low-rank adaptation[EB/OL]. (2024-07-09) [2024-11-11]. https://arxiv.org/abs/2402.09353.
[86] LIU Siyang, ZHENG Chujie, DEMASI O, et al. Towards emotional support dialog systems[EB/OL]. (2021-06-02)[2024-06-20]. https://arxiv.org/abs/2106.01144.
[87] HENDRYCKS D, BURNS C, BASART S, et al. Measuring massive multitask language understanding[EB/OL]. (2020-09-21) [2024-06-20]. https://arxiv.org/abs/2009.03300.
[88] SAKAGUCHI K, LE BRAS R, BHAGAVATULA C, et al. WinoGrande: an adversarial winograd schema challenge at scale[J]. Communications of the ACM, 2021, 64(9): 99-106.
[89] ZHONG Wanjun, CUI Ruixiang, GUO Yiduo, et al. AGIEval: a human-centric benchmark for evaluating foundation models[EB/OL]. (2023-09-18) [2024-06-20]. https://arxiv.org/abs/2304.06364.
[90] LI Haonan, ZHANG Yixuan, KOTO F, et al. CMMLU: measuring massive multitask language understanding in Chinese[EB/OL]. (2024-01-17) [2024-06-20]. https://arxiv.org/abs/2306.09212.
[91] ZHANG Xiaotian, LI Chunyang, ZONG Yi, et al. Evaluating the performance of large language models on GAOKAO benchmark[EB/OL]. (2024-02-24) [2024-06-20]. https://arxiv.org/abs/2305.12474.
[92] ZHANG W, ALJUNIED M, GAO C, et al. M3Exam: a multilingual, multimodal, multilevel benchmark for examining large language models[J]. Advances in neural information processing systems, 2023, 36: 5484-5505.
[93] COBBE K, KOSARAJU V, BAVARIAN M, et al. Training verifiers to solve math word problems[EB/OL]. (2021-11-18) [2024-06-20]. https://arxiv.org/abs/2110.14168.
[94] HENDRYCKS D, BURNS C, KADAVATH S, et al. Measuring mathematical problem solving with the MATH dataset[EB/OL]. (2021-11-08) [2024-06-20]. https://arxiv.org/abs/2103.03874.
[95] HU J, RUDER S, SIDDHANT A, et al. XTREME: a massively multilingual multi-task benchmark for evaluating cross-lingual generalisation[C]//International Conference on Machine Learning. Virtual Event: PMLR, 2020: 4411-4421.
[96] WANG A, SINGH A, MICHAEL J, et al. GLUE: a multi-task benchmark and analysis platform for natural language understanding[EB/OL]. (2018-09-18) [2024-06-20]. https://arxiv.org/abs/1804.07461.
[97] MARCUS G. The next decade in AI: four steps towards robust artificial intelligence[EB/OL]. (2020-02-19)[2024-06-20]. https://arxiv.org/abs/2002.06177.
[98] BENDER E M, KOLLER A. Climbing towards NLU: on meaning, form, and understanding in the age of data[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: ACL, 2020: 5185-5198.
[99] ZELLERS R, HOLTZMAN A, RASHKIN H, et al. Defending against neural fake news[EB/OL]. (2019-05-29)[2024-06-20]. https://arxiv.org/abs/1905.12616v3.
[100] SCHRAMOWSKI P, TURAN C, ANDERSEN N, et al. Large pre-trained language models contain human-like biases of what is right and wrong to do[J]. Nature machine intelligence, 2022, 4: 258-268.
[101] BENDER E M, GEBRU T, MCMILLAN-MAJOR A, et al. On the dangers of stochastic parrots[C]//Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Virtual Event: ACM, 2021: 610-623.
[102] RIEKE N, HANCOX J, LI Wenqi, et al. The future of digital health with federated learning[J]. NPJ digital medicine, 2020, 3: 119.
[103] CHEN Yao, GAN Wensheng, WU Yongdong, et al. Privacy-preserving federated mining of frequent itemsets[J]. Information sciences, 2023, 625: 504-520.
[104] STRUBELL E, GANESH A, MCCALLUM A. Energy and policy considerations for modern deep learning research[J]. Proceedings of the AAAI conference on artificial intelligence, 2020, 34(9): 13693-13696.
[105] ZHANG Chen, XIE Yu, BAI Hang, et al. A survey on federated learning[J]. Knowledge-based systems, 2021, 216: 106775.
相似文献/References:
[1]李德毅.网络时代人工智能研究与发展[J].智能系统学报,2009,4(1):1.
 LI De-yi.AI research and development in the network age[J].CAAI Transactions on Intelligent Systems,2009,4(1):1.
[2]赵克勤.二元联系数A+Bi的理论基础与基本算法及在人工智能中的应用[J].智能系统学报,2008,3(6):476.
 ZHAO Ke-qin.The theoretical basis and basic algorithm of binary connection A+Bi and its application in AI[J].CAAI Transactions on Intelligent Systems,2008,3(6):476.
[3]徐玉如,庞永杰,甘永,等.智能水下机器人技术展望[J].智能系统学报,2006,1(1):9.
 XU Yu-ru,PANG Yong-jie,GAN Yong,et al.AUV—state-of-the-art and prospect[J].CAAI Transactions on Intelligent Systems,2006,1(1):9.
[4]王志良.人工心理与人工情感[J].智能系统学报,2006,1(1):38.
 WANG Zhi-liang.Artificial psychology and artificial emotion[J].CAAI Transactions on Intelligent Systems,2006,1(1):38.
[5]赵克勤.集对分析的不确定性系统理论在AI中的应用[J].智能系统学报,2006,1(2):16.
 ZHAO Ke-qin.The application of uncertainty systems theory of set pair analysis (SPU) in the artificial intelligence[J].CAAI Transactions on Intelligent Systems,2006,1(2):16.
[6]秦裕林,朱新民,朱丹.Herbert Simon在最后几年里的两个研究方向[J].智能系统学报,2006,1(2):11.
 QIN Yu-lin,ZHU Xin-min,ZHU Dan.Herbert Simon’s two research directions in his last years[J].CAAI Transactions on Intelligent Systems,2006,1(2):11.
[7]谷文祥,李 丽,李丹丹.规划识别的研究及其应用[J].智能系统学报,2007,2(1):1.
 GU Wen-xiang,LI Li,LI Dan-dan.Research and application of plan recognition[J].CAAI Transactions on Intelligent Systems,2007,2(1):1.
[8]杨春燕,蔡 文.可拓信息-知识-智能形式化体系研究[J].智能系统学报,2007,2(3):8.
 YANG Chun-yan,CAI Wen.A formalized system of extension information-knowledge-intelligence[J].CAAI Transactions on Intelligent Systems,2007,2(3):8.
[9]赵克勤.SPA的同异反系统理论在人工智能研究中的应用[J].智能系统学报,2007,2(5):20.
 ZHAO Ke-qin.The application of SPA-based identical-discrepancy-contrary system theory in artificial intelligence research[J].CAAI Transactions on Intelligent Systems,2007,2(5):20.
[10]王志良,杨溢,杨扬,等.一种周期时变马尔可夫室内位置预测模型[J].智能系统学报,2009,4(6):521.[doi:10.3969/j.issn.1673-4785.2009.06.009]
 WANG Zhi-liang,YANG Yi,YANG Yang,et al.A periodic time-varying Markov model for indoor location prediction[J].CAAI Transactions on Intelligent Systems,2009,4(6):521.[doi:10.3969/j.issn.1673-4785.2009.06.009]

备注/Memo

Received: 2024-06-24.
Foundation item: National Natural Science Foundation of China (61603257).
About the authors: XIAO Jianli, associate professor, recipient of the Wu Wenjun Artificial Intelligence Science and Technology Award, distinguished member of the China Computer Federation, senior member of the Chinese Association of Automation, member of the Chinese Association for Artificial Intelligence, and senior member of IEEE and ACM; main research interests are artificial intelligence and big data; author of the book 《人工智能怎么学》. E-mail: audyxiao@sjtu.edu.cn. HUANG Xingyu, master’s student; main research interest is smart education. E-mail: 233350741@st.usst.edu.cn. JIANG Fei, associate researcher; main research interest is intelligent teaching. E-mail: fjiang@sjtu.edu.cn.
Corresponding author: XIAO Jianli. E-mail: audyxiao@sjtu.edu.cn

更新日期/Last Update: 2025-09-05