[1]吴国栋,秦辉,胡全兴,等.大语言模型及其个性化推荐研究[J].智能系统学报,2024,19(6):1351-1365.[doi:10.11992/tis.202309036]
 WU Guodong,QIN Hui,HU Quanxing,et al.Research on large language models and personalized recommendation[J].CAAI Transactions on Intelligent Systems,2024,19(6):1351-1365.[doi:10.11992/tis.202309036]

大语言模型及其个性化推荐研究

参考文献/References:
[1] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[EB/OL]. (2017-06-12)[2023-08-25]. http://arxiv.org/abs/1706.03762.
[2] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[EB/OL]. (2020-05-28)[2023-08-25]. http://arxiv.org/abs/2005.14165.
[3] HOU Yupeng, ZHANG Junjie, LIN Zihan, et al. Large language models are zero-shot rankers for recommender systems[EB/OL]. (2023-05-15)[2023-08-25]. http://arxiv.org/abs/2305.08845.
[4] WANG Lei, LIM E P. Zero-shot next-item recommendation using large pretrained language models[EB/OL]. (2023-04-06)[2023-08-25]. http://arxiv.org/abs/2304.03153.
[5] KOJIMA T, GU S S, REID M, et al. Large language models are zero-shot reasoners[EB/OL]. (2022-05-24)[2023-08-25]. http://arxiv.org/abs/2205.11916.
[6] HE Xiangnan, LIAO Lizi, ZHANG Hanwang, et al. Neural collaborative filtering[C]//Proceedings of the 26th International Conference on World Wide Web. [S.l.: s.n.], 2017: 173-182.
[7] 黄璐, 林川杰, 何军, 等. 融合主题模型和协同过滤的多样化移动应用推荐[J]. 软件学报, 2017, 28(3): 708-720.
HUANG Lu, LIN Chuanjie, HE Jun, et al. Diversified mobile app recommendation combining topic model and collaborative filtering[J]. Journal of software, 2017, 28(3): 708-720.
[8] SON J, KIM S B. Content-based filtering for recommendation systems using multiattribute networks[J]. Expert systems with applications, 2017, 89: 404-412.
[9] 刘建勋, 石敏, 周栋, 等. 基于主题模型的Mashup标签推荐方法[J]. 计算机学报, 2017, 40(2): 520-534.
LIU Jianxun, SHI Min, ZHOU Dong, et al. Topic model based tag recommendation method for mashups[J]. Chinese journal of computers, 2017, 40(2): 520-534.
[10] BASILICO J, HOFMANN T. Unifying collaborative and content-based filtering[C]//Proceedings of the Twenty-first International Conference on Machine Learning. Banff: ACM, 2004: 65-72.
[11] 曹俊豪, 李泽河, 江龙, 等. 一种融合协同过滤和用户属性过滤的混合推荐算法[J]. 电子设计工程, 2018, 26(9): 60-63.
CAO Junhao, LI Zehe, JIANG Long, et al. A hybrid recommendation algorithm based on collaborative filtering and user attribute filtering[J]. Electronic design engineering, 2018, 26(9): 60-63.
[12] 孙冬婷, 何涛, 张福海. 推荐系统中的冷启动问题研究综述[J]. 计算机与现代化, 2012(5): 59-63.
SUN Dongting, HE Tao, ZHANG Fuhai. Survey of cold-start problem in collaborative filtering recommender system[J]. Computer and modernization, 2012(5): 59-63.
[13] DA’U A, SALIM N. Recommendation system based on deep learning methods: a systematic review and new directions[J]. Artificial intelligence review, 2020, 53(4): 2709-2748.
[14] ZHANG Jiawei. Graph-ToolFormer: to empower LLMs with graph reasoning ability via prompt augmented by ChatGPT[EB/OL]. (2023-04-10)[2023-08-25]. http://arxiv.org/abs/2304.11116.
[15] PAPARRIZOS I, CAMBAZOGLU B B, GIONIS A. Machine learned job recommendation[C]//Proceedings of the Fifth ACM Conference on Recommender Systems. Chicago: ACM, 2011: 325-328.
[16] LIU Jiahui, DOLAN P, PEDERSEN E R. Personalized news recommendation based on click behavior[C]//Proceedings of the 15th International Conference on Intelligent User Interfaces. Hong Kong: ACM, 2010: 31-40.
[17] DAI Sunhao, SHAO Ninglu, ZHAO Haiyuan, et al. Uncovering ChatGPT’s capabilities in recommender systems[EB/OL]. (2023-05-03)[2023-08-25]. http://arxiv.org/abs/2305.02182.
[18] WANG Lei, ZHANG Jingsen, CHEN Xu, et al. User behavior simulation with large language model based agents[EB/OL]. (2023-06-05)[2023-08-25]. https://arxiv.org/abs/2306.02552.
[19] QIU Zhaopeng, WU Xian, GAO Jingyue, et al. U-BERT: pre-training user representations for improved recommendation[J]. Proceedings of the AAAI conference on artificial intelligence, 2021, 35(5): 4320-4327.
[20] SUN Fei, LIU Jun, WU Jian, et al. BERT4Rec: sequential recommendation with bidirectional encoder representations from transformer[C]//Proceedings of the 28th ACM International Conference on Information and Knowledge Management. Beijing: ACM, 2019: 1441-1450.
[21] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of machine learning research, 2020, 21(1): 5485-5551.
[22] LI Jiacheng, WANG Ming, LI Jin, et al. Text is all you need: learning language representations for sequential recommendation[EB/OL]. (2023-05-23)[2023-08-25]. http://arxiv.org/abs/2305.13731.
[23] CUI Zeyu, MA Jianxin, ZHOU Chang, et al. M6-Rec: generative pretrained language models are open-ended recommender systems[EB/OL]. (2022-05-17)[2023-08-25]. http://arxiv.org/abs/2205.08084.
[24] KOREN Y, BELL R, VOLINSKY C. Matrix factorization techniques for recommender systems[J]. Computer, 2009, 42(8): 30-37.
[25] ZHOU Guorui, ZHU Xiaoqiang, SONG Chenru, et al. Deep interest network for click-through rate prediction[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. London: ACM, 2018: 1059-1068.
[26] FRIEDMAN L, AHUJA S, ALLEN D, et al. Leveraging large language models in conversational recommender systems[EB/OL]. (2023-05-13)[2023-08-25]. http://arxiv.org/abs/2305.07961.
[27] LIN Guo, ZHANG Yongfeng. Sparks of artificial general recommender (AGR): early experiments with ChatGPT[EB/OL]. (2023-05-08)[2023-08-25]. http://arxiv.org/abs/2305.04518.
[28] XI Yunjia, LIU Weiwen, LIN Jianghao, et al. Towards open-world recommendation with knowledge augmentation from large language models[EB/OL]. (2023-06-19)[2023-08-25]. http://arxiv.org/abs/2306.10933.
[29] LI Jinming, ZHANG Wentao, WANG Tian, et al. GPT4Rec: a generative framework for personalized recommendation and user interests interpretation[EB/OL]. (2023-04-08)[2023-04-25]. http://arxiv.org/abs/2304.03879.
[30] DU Yingpeng, LUO Di, YAN Rui, et al. Enhancing job recommendation through LLM-based generative adversarial networks[EB/OL]. (2023-07-20)[2023-08-25]. http://arxiv.org/abs/2307.10747.
[31] WANG Wenjie, LIN Xinyu, FENG Fuli, et al. Generative recommendation: towards next-generation recommender paradigm[EB/OL]. (2023-04-07)[2023-08-25]. http://arxiv.org/abs/2304.03516.
[32] GENG Shijie, LIU Shuchang, FU Zuohui, et al. Recommendation as language processing (RLP): a unified pretrain, personalized prompt & predict paradigm (P5)[C]//Proceedings of the 16th ACM Conference on Recommender Systems. Seattle: ACM, 2022: 299-315.
[33] JI Jianchao, LI Zelong, XU Shuyuan, et al. GenRec: large language model for generative recommendation[M]//Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2024: 494-502.
[34] ZHANG Zizhuo, WANG Bang. Prompt learning for news recommendation[EB/OL]. (2023-04-11)[2023-08-25]. http://arxiv.org/abs/2304.05263.
[35] OKAMOTO K, CHEN Wei, LI Xiangyang. Ranking of closeness centrality for large-scale social networks[C]//PREPARATA F P, WU X, YIN J. International Workshop on Frontiers in Algorithmics. Berlin: Springer, 2008: 186-195.
[36] ZHANG Junlong, LUO Yu. Degree centrality, betweenness centrality, and closeness centrality in social network[C]//Proceedings of the 2017 2nd International Conference on Modelling, Simulation and Applied Mathematics. Paris: Atlantis Press, 2017: 300-303.
[37] NEWMAN M E J. A measure of betweenness centrality based on random walks[J]. Social networks, 2005, 27(1): 39-54.
[38] HUANG Xiao, ZHANG Jingyuan, LI Dingcheng, et al. Knowledge graph embedding based question answering[C]//Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining. Melbourne: ACM, 2019: 105-113.
[39] ZHANG Yuyu, DAI Hanjun, KOZAREVA Z, et al. Variational reasoning for question answering with knowledge graph[EB/OL]. (2017-11-27)[2023-01-01]. http://arxiv.org/abs/1709.04071.
[40] RONG Yu, HUANG Wenbing, XU Tingyang, et al. DropEdge: towards deep graph convolutional networks on node classification[EB/OL]. (2019-07-25)[2023-08-25]. http://arxiv.org/abs/1907.10903.
[41] ERRICA F, PODDA M, BACCIU D, et al. A fair comparison of graph neural networks for graph classification[EB/OL]. (2019-12-20)[2023-08-25]. http://arxiv.org/abs/1912.09893.
[42] WANG Heng, FENG Shangbin, HE Tianxing, et al. Can language models solve graph problems in natural language?[EB/OL]. (2023-05-17)[2023-08-25]. http://arxiv.org/abs/2305.10037.
[43] XIE Han, ZHENG Da, MA Jun, et al. Graph-aware language model pre-training on a large graph corpus can help multiple graph applications[EB/OL]. (2023-06-05)[2023-08-25]. http://arxiv.org/abs/2306.02592.
[44] GUO Jiayan, DU Lun, LIU Hengyu, et al. GPT4Graph: can large language models understand graph structured data? an empirical evaluation and benchmarking[EB/OL]. (2023-05-24)[2023-08-25]. http://arxiv.org/abs/2305.15066.
[45] WU Likang, QIU Zhaopeng, ZHENG Zhi, et al. Exploring large language model for graph data understanding in online job recommendations[EB/OL]. (2023-07-10)[2023-08-25]. http://arxiv.org/abs/2307.05722.
[46] ZHU Deyao, CHEN Jun, SHEN Xiaoqian, et al. MiniGPT-4: enhancing vision-language understanding with advanced large language models[EB/OL]. (2023-04-20)[2023-08-25]. http://arxiv.org/abs/2304.10592.
[47] LI Junnan, LI Dongxu, SAVARESE S, et al. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models[EB/OL]. (2023-01-30)[2023-08-25]. http://arxiv.org/abs/2301.12597.
[48] BAO Keqin, ZHANG Jizhi, ZHANG Yang, et al. TALLRec: an effective and efficient tuning framework to align large language model with recommendation[EB/OL]. (2023-04-30)[2023-08-25]. http://arxiv.org/abs/2305.00447.
[49] ZHANG Junjie, XIE Ruobing, HOU Yupeng, et al. Recommendation as instruction following: a large language model empowered recommendation approach[EB/OL]. (2023-05-11)[2023-08-25]. http://arxiv.org/abs/2305.07001.
[50] KANG Wangcheng, NI Jianmo, MEHTA N, et al. Do LLMs understand user preferences? evaluating LLMs on user rating prediction[EB/OL]. (2023-05-10)[2023-08-25]. http://arxiv.org/abs/2305.06474.
[51] YONA G, HONOVICH O, LAISH I, et al. Surfacing biases in large language models using contrastive input decoding[EB/OL]. (2023-05-12)[2023-08-25]. http://arxiv.org/abs/2305.07378.
[52] CHEN Zhipeng, ZHOU Kun, ZHANG Beichen, et al. ChatCoT: tool-augmented chain-of-thought reasoning on chat-based large language models[EB/OL]. (2023-05-23)[2023-08-25]. http://arxiv.org/abs/2305.14323.
[53] OUYANG Long, WU J, XU Jiang, et al. Training language models to follow instructions with human feedback[EB/OL]. (2022-03-04)[2023-08-25]. http://arxiv.org/abs/2203.02155.
[54] HUA Wenyue, GE Yingqiang, XU Shuyuan, et al. UP5: unbiased foundation model for fairness-aware recommendation[EB/OL]. (2023-05-20)[2023-08-25]. http://arxiv.org/abs/2305.12090.
[55] ZHANG Jizhi, BAO Keqin, ZHANG Yang, et al. Is ChatGPT fair for recommendation? evaluating fairness in large language model recommendation[EB/OL]. (2023-05-12)[2023-08-25]. http://arxiv.org/abs/2305.07609.
[56] AI Qingyao, BAI Ting, CAO Zhao, et al. Information retrieval meets large language models: a strategic report from Chinese IR community[J]. AI open, 2023, 4: 80-90.
[57] ZHAO W X, ZHOU Kun, LI Junyi, et al. A survey of large language models[EB/OL]. (2023-03-21)[2023-08-25]. http://arxiv.org/abs/2303.18223.
[58] BILLS S, CAMMARATA N, MOSSING D, et al. Language models can explain neurons in language models[EB/OL]. (2023-05-09)[2024-03-25]. https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html.
[59] WEI Wei, REN Xubin, TANG Jiabin, et al. LLMRec: large language models with graph augmentation for recommendation[EB/OL]. (2023-11-01)[2024-03-25]. http://arxiv.org/abs/2311.00423.
相似文献/Similar Articles:
[1]李慧,马小平,胡云,等.融合上下文信息的社会网络推荐系统[J].智能系统学报,2015,10(2):293.[doi:10.3969/j.issn.1673-4785.201406017]
 LI Hui,MA Xiaoping,HU Yun,et al.Social network recommendation system mixing context information[J].CAAI Transactions on Intelligent Systems,2015,10(2):293.[doi:10.3969/j.issn.1673-4785.201406017]
[2]王大玲,冯时,张一飞,等.社会媒体多模态、多层次资源推荐技术研究[J].智能系统学报,2014,9(3):265.[doi:10.3969/j.issn.1673-4785.201403068]
 WANG Daling,FENG Shi,ZHANG Yifei,et al.Study on the recommendations of multi-modal and multi-level resources in social media[J].CAAI Transactions on Intelligent Systems,2014,9(3):265.[doi:10.3969/j.issn.1673-4785.201403068]
[3]顾军华,谢志坚,武君艳,等.基于图游走的并行协同过滤推荐算法[J].智能系统学报,2019,14(4):743.[doi:10.11992/tis.201806002]
 GU Junhua,XIE Zhijian,WU Junyan,et al.Parallel collaborative filtering recommendation algorithm based on graph walk[J].CAAI Transactions on Intelligent Systems,2019,14(4):743.[doi:10.11992/tis.201806002]
[4]黄河燕,李思霖,兰天伟,等.大语言模型安全性:分类、评估、归因、缓解、展望[J].智能系统学报,2025,20(1):2.[doi:10.11992/tis.202401006]
 HUANG Heyan,LI Silin,LAN Tianwei,et al.A survey on the safety of large language model: classification, evaluation, attribution, mitigation and prospect[J].CAAI Transactions on Intelligent Systems,2025,20(1):2.[doi:10.11992/tis.202401006]

备注/Memo

Received: 2023-09-21.
Foundation items: National Natural Science Foundation of China (32371993); Natural Science Foundation of Anhui Province (2108085MF209); Major Science and Technology Project of Anhui Province (202103b06020013).
About the authors: WU Guodong, associate professor, whose main research interests are deep learning and recommender systems. He has led one key project and one general project under the Anhui Provincial Natural Science Research Program as well as one key science and technology research project of Anhui Province, and has published more than 30 academic papers. E-mail: wugd@ahau.edu.cn. QIN Hui, master's student, whose main research interest is recommender systems. E-mail: 2504864202@qq.com. HU Quanxing, master's student, whose main research interest is blockchain-based trusted recommendation. E-mail: 1763273299@qq.com.
Corresponding author: WU Guodong. E-mail: gdwu1120@qq.com.

更新日期/Last Update: 2024-11-05
Copyright © Editorial Office of CAAI Transactions on Intelligent Systems
Address: Building 145-1, Nantong Street, Nangang District, Harbin 150001, Heilongjiang Province, China. Tel: 0451-82534001, 82518134. E-mail: tis@vip.sina.com