ZHANG Xiaokun, LIU Yan, CHEN Jing. Representation learning using network embedding based on external word vectors[J]. CAAI Transactions on Intelligent Systems, 2019, 14(5): 1056-1063. [doi: 10.11992/tis.201809037]

Representation learning using network embedding based on external word vectors

CAAI Transactions on Intelligent Systems [ISSN: 1673-4785 / CN: 23-1538/TP]

Volume:
Vol. 14
Issue:
No. 5, 2019
Pages:
1056-1063
Section:
Publication date:
2019-09-05

Article Info

Title:
Representation learning using network embedding based on external word vectors
Author(s):
ZHANG Xiaokun, LIU Yan, CHEN Jing
State Key Laboratory of Mathematical Engineering and Advanced Computing, Zhengzhou 450000, Henan, China
Keywords:
network embedding; content information network; auto-encoder; external word vectors; vertex classification; word vectors; distributed representation; representation learning
CLC number:
TP181
DOI:
10.11992/tis.201809037
Abstract:
Network embedding, which preserves a network's sophisticated features, can effectively learn low-dimensional embeddings of vertices and thereby lower computing and storage costs. Content information networks (such as Twitter), which contain rich text information, are commonly used in daily life. Most studies on content information networks are based only on the information of the network itself; because the task-related corpus is limited in size, modeling with task text alone is prone to semantic drift or semantic incompleteness. Distributed word vectors are becoming increasingly popular in natural language processing tasks. As a low-dimensional representation of the semantic feature space, word vectors can preserve syntactic and semantic regularities. By introducing external word vectors into the modeling process, we can exploit these external syntactic and semantic features. Hence, in this paper, we propose network embedding based on external word vectors (NE-EWV), whereby a feature-fusion representation is learned from both the semantic feature space and the structural feature space. Empirical experiments were conducted on real-world content information network datasets to validate the effectiveness of the model. The results show that in the link prediction task, the AUC of the model was 7% to 19% higher than that of models that consider only structural features, and in most cases was 1% to 12% higher than that of models that consider both structural and text features. In the node classification task, its performance was comparable with that of context-aware network embedding (CANE), the state-of-the-art baseline. This demonstrates that introducing external word vectors as external knowledge can effectively improve network representation ability.
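The abstract outlines the approach without implementation detail. The following is a minimal sketch, in PyTorch (a framework assumption; the paper does not specify one), of the general idea described above: build a semantic feature vector for each vertex from pre-trained external word vectors, concatenate it with a structural feature vector, and let an auto-encoder learn a fused low-dimensional vertex representation that can then be scored for link prediction. All names, dimensions, the word-vector averaging, the plain reconstruction loss, and the toy data are illustrative assumptions, not the NE-EWV objective from the paper.

# Minimal sketch (not the authors' code): fuse structural features with text
# features built from external word vectors via a small auto-encoder.
import torch
import torch.nn as nn

class FusionAutoEncoder(nn.Module):
    """Encode concatenated [structural ; semantic] vertex features into one
    low-dimensional representation and reconstruct the input."""
    def __init__(self, struct_dim, text_dim, embed_dim):
        super().__init__()
        in_dim = struct_dim + text_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.Tanh())
        self.decoder = nn.Linear(embed_dim, in_dim)

    def forward(self, x):
        z = self.encoder(x)            # fused vertex representation
        return z, self.decoder(z)      # representation and reconstruction

def text_features(token_ids, word_vectors):
    """Average the pre-trained (external) word vectors of a vertex's text."""
    return word_vectors[token_ids].mean(dim=0)

# Toy data: structural embeddings (e.g. from a structure-only method) and
# external word vectors (e.g. trained on a large outside corpus) are random here.
torch.manual_seed(0)
num_vertices, struct_dim, vocab_size, word_dim, embed_dim = 6, 16, 100, 50, 8
struct_emb = torch.randn(num_vertices, struct_dim)
external_wv = torch.randn(vocab_size, word_dim)
texts = [torch.randint(0, vocab_size, (12,)) for _ in range(num_vertices)]
sem_emb = torch.stack([text_features(t, external_wv) for t in texts])

model = FusionAutoEncoder(struct_dim, word_dim, embed_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.cat([struct_emb, sem_emb], dim=1)

for _ in range(200):                   # plain reconstruction objective only
    z, x_hat = model(x)
    loss = nn.functional.mse_loss(x_hat, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Score a candidate edge (u, v) for link prediction by the inner product of
# the fused representations; AUC would be computed over many such scores.
z, _ = model(x)
print(float((z[0] * z[1]).sum()))

In practice, the structural features would come from a structure-based embedding such as DeepWalk [7] or LINE [8], and the external word vectors from a skip-gram model [14] trained on a large outside corpus; the paper defines its own objective for coupling the two feature spaces, which this sketch does not reproduce.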

References:

[1] CUI Peng, WANG Xiao, PEI Jian, et al. A survey on network embedding[J]. IEEE transactions on knowledge and data engineering, 2019, 31(5):833-852.
[2] LIBEN-NOWELL D, KLEINBERG J. The link prediction problem for social networks[J]. Journal of the American society for information science and technology, 2007, 58(7):1019-1031.
[3] LANCICHINETTI A, FORTUNATO S. Community detection algorithms:a comparative analysis[J]. Physical review E, 2009, 80:056117.
[4] BHAGAT S, CORMODE G, MUTHUKRISHNAN S. Node classification in social networks[M]//AGGARWAL C C. Social Network Data Analytics. Boston, MA:Springer, 2011:115-148.
[5] DONG Xin, HALEVY A, MADHAVAN J, et al. Similarity search for web services[C]//Proceedings of the Thirtieth International Conference on Very Large Data Bases. Toronto, Canada, 2004:372-383.
[6] BASTIAN M, HEYMANN S, JACOMY M. Gephi:an open source software for exploring and manipulating networks[C]//Proceedings of International AAAI Conference on Weblogs and Social Media. San Jose, California, USA, 2009:361-362.
[7] PEROZZI B, AL-RFOU R, SKIENA S. DeepWalk:online learning of social representations[C]//Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, USA, 2014:701-710.
[8] TANG Jian, QU Meng, WANG Mingzhe, et al. LINE:large-scale information network embedding[C]//Proceedings of the 24th International Conference on World Wide Web. Florence, Italy, 2015:1067-1077.
[9] GROVER A, LESKOVEC J. node2vec:scalable feature learning for networks[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Francisco, California, USA, 2016:855-864.
[10] YANG Cheng, LIU Zhiyuan, ZHAO Deli, et al. Network representation learning with rich text information[C]//Proceedings of the 24th International Conference on Artificial Intelligence. Buenos Aires, Argentina, 2015:2111-2117.
[11] TU Cunchao, LIU Han, LIU Zhiyuan, et al. CANE:context-aware network embedding for relation modeling[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada, 2017:1722-1731.
[12] TANG Lei, LIU Huan. Scalable learning of collective behavior based on sparse social dimensions[C]//Proceedings of the 18th ACM Conference on Information and Knowledge Management. Hong Kong, China, 2009:1107-1116.
[13] BALASUBRAMANIAN M, SCHWARTZ E L, TENENBAUM J B, et al. The isomap algorithm and topological stability[J]. Science, 2002, 295(5552):7-7.
[14] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems. Lake Tahoe, Nevada, 2013:3111-3119.
[15] RECHT B, RÉ C, WRIGHT S J, et al. Hogwild:a lock-free approach to parallelizing stochastic gradient descent[C]//Proceedings of Advances in Neural Information Processing Systems. 2011:693-701.
[16] CAO Shaosheng, LU Wei, XU Qiongkai. GraRep:learning graph representations with global structural information[C]//Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. Melbourne, Australia, 2015:891-900.
[17] TU Cunchao, ZHANG Weicheng, LIU Zhiyuan, et al. Max-margin DeepWalk:discriminative learning of network representation[C]//Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. New York, USA, 2016:3889-3895.
[18] PAGE L, BRIN S, MOTWANI R, et al. The PageRank citation ranking:Bringing order to the web[R]. Palo Alto:Stanford InfoLab, 1999.
[19] KINGMA D, BA J. Adam:a method for stochastic optimization[C]//Proceedings of the 3rd International Conference on Learning Representations. San Diego, USA, 2015.
[20] HSU C W, CHANG C C, LIN C J. A practical guide to support vector classification[EB/OL]. http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf.
[21] HANLEY J A, MCNEIL B J. The meaning and use of the area under a Receiver Operating Characteristic (ROC) curve[J]. Radiology, 1982, 143:29-36.

Similar Documents:

[1] CHANG Liang, ZHANG Weitao, GU Tianlong, et al. Review of recommendation systems based on knowledge graph[J]. CAAI Transactions on Intelligent Systems, 2019, 14(2): 207. [doi: 10.11992/tis.201805001]

Memo

Received: 2018-09-19.
Foundation items: National Natural Science Foundation of China (61309007, U1636219); National Key Research and Development Program of China (2016YFB0801303, 2016QY01W0105).
About the authors: ZHANG Xiaokun, male, born in 1991, master's student; his main research interest is network representation learning. LIU Yan, female, born in 1979, associate professor, Ph.D.; her main research interests are network information security and network resource mapping. She has applied for 10 invention patents, 5 of which have been granted, and has published more than 40 academic papers. CHEN Jing, female, born in 1990, lecturer; her main research interests are data mining, natural language processing, and social network analysis. She holds 2 granted invention patents and has published 10 academic papers.
Corresponding author: LIU Yan. E-mail: ms_liuyan@aliyun.com