CHEN Pei, JING Liping. Word representation learning model using matrix factorization to incorporate semantic information[J]. CAAI Transactions on Intelligent Systems, 2017, 12(05): 661-667. [doi:10.11992/tis.201706012]

Word representation learning model using matrix factorization to incorporate semantic information

CAAI Transactions on Intelligent Systems [ISSN: 1673-4785 / CN: 23-1538/TP]

Volume:
Vol. 12
Issue:
2017, No. 05
Pages:
661-667
Publication date:
2017-10-25

Article Info

Title:
Word representation learning model using matrix factorization to incorporate semantic information
Author(s):
CHEN Pei, JING Liping
Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing 100044, China
Keywords:
natural language processing; word representation; matrix factorization; semantic information; knowledge base
CLC number:
TP391
DOI:
10.11992/tis.201706012
Abstract:
Word representation plays an important role in natural language processing and, owing to its simplicity and effectiveness, has attracted increasing attention from researchers in recent years. However, traditional methods for learning word representations generally rely on large amounts of unlabeled text while neglecting semantic information about words, such as the semantic relationships between them. To fully exploit existing domain knowledge bases, which contain rich semantic information about words, this paper proposes a word representation learning method that incorporates semantic information (KbEMF). The method adds domain-knowledge constraint terms to a matrix-factorization-based word representation learning model, so that word pairs with strong semantic relationships obtain relatively similar word vectors. Results of word analogy reasoning tasks and word similarity measurement tasks on real data show that KbEMF clearly outperforms existing models.
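The abstract describes the mechanism only in outline, so the following is a minimal NumPy sketch of one plausible reading: squared-error factorization of a word-context matrix M into word vectors W and context vectors C, plus a penalty that pulls together the vectors of word pairs a knowledge base marks as related. The exact KbEMF objective, its weighting, and its optimizer are not given on this page; the loss below and all names are illustrative assumptions.

```python
# Hypothetical sketch of matrix factorization with a knowledge-base
# constraint, in the spirit of the abstract (NOT the paper's exact KbEMF
# formulation): minimize 0.5*||W C^T - M||^2 + 0.5*lam*sum ||w_i - w_j||^2
# over related pairs (i, j) supplied by a knowledge base.
import numpy as np

def kb_constrained_mf(M, related_pairs, dim=50, lam=0.1,
                      lr=0.01, epochs=200, seed=0):
    """M: (V x V) word-context co-occurrence/PMI matrix;
    related_pairs: list of (i, j) index pairs from a knowledge base."""
    rng = np.random.default_rng(seed)
    V = M.shape[0]
    W = rng.normal(scale=0.1, size=(V, dim))   # word vectors
    C = rng.normal(scale=0.1, size=(V, dim))   # context vectors
    for _ in range(epochs):
        R = W @ C.T - M                        # reconstruction residual
        grad_W = R @ C                         # gradient of 0.5*||WC^T - M||^2
        grad_C = R.T @ W
        for i, j in related_pairs:             # knowledge-base penalty term
            diff = W[i] - W[j]                 # gradient of 0.5*lam*||w_i - w_j||^2
            grad_W[i] += lam * diff
            grad_W[j] -= lam * diff
        W -= lr * grad_W                       # plain gradient-descent update
        C -= lr * grad_C
    return W

# Toy usage: 5 words; the "knowledge base" declares words 0 and 1 related,
# so their vectors should tend to end up closer than unrelated pairs.
M = np.random.default_rng(1).random((5, 5))
W = kb_constrained_mf(M, related_pairs=[(0, 1)], dim=3, lam=1.0)
print(np.linalg.norm(W[0] - W[1]), np.linalg.norm(W[0] - W[2]))
```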



Memo:
Received: 2017-06-06.
Foundation items: National Natural Science Foundation of China (61370129, 61375062, 61632004); Program for Changjiang Scholars and Innovative Research Team in University (IRT201206).
Author biographies: CHEN Pei, born in 1990, master's student; her main research interests are natural language processing and sentiment analysis. JING Liping, born in 1978, professor and PhD; her main research interests are data mining, text mining, bioinformatics, and business intelligence.
Corresponding author: JING Liping. E-mail: lpjing@bjtu.edu.cn
Last update: 2017-10-25