[1]白宇康,陈彦敏,樊小超,等.图神经网络和数值诱导正则化的数值推理方法[J].智能系统学报,2024,19(5):1268-1276.[doi:10.11992/tis.202308045]
 BAI Yukang,CHEN Yanmin,FAN Xiaochao,et al.Numerical reasoning method for graph neural networks and numerically induced regularization[J].CAAI Transactions on Intelligent Systems,2024,19(5):1268-1276.[doi:10.11992/tis.202308045]

图神经网络和数值诱导正则化的数值推理方法 (Numerical reasoning method for graph neural networks and numerically induced regularization)

参考文献/References:
[1] 杜永萍, 赵以梁, 阎婧雅, 等. 基于深度学习的机器阅读理解研究综述[J]. 智能系统学报, 2022, 17(6): 1074-1083.
DU Yongping, ZHAO Yiliang, YAN Jingya, et al. Survey of machine reading comprehension based on deep learning[J]. CAAI transactions on intelligent systems, 2022, 17(6): 1074-1083.
[2] CHEN Xinyun, LIANG Chen, YU Adams Wei, et al. Neural symbolic reader: scalable integration of distributed and symbolic representations for reading comprehension[C]//2020 International Conference on Learning Representations. Addis Ababa: ICLR, 2020: 1–16.
[3] ZHOU Yongwei, BAO Junwei, DUAN Chaoqun, et al. OPERA: operation-pivoted discrete reasoning over text[C]//Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Seattle: ACL, 2022: 1655–1666.
[4] GEVA M, GUPTA A, BERANT J. Injecting numerical reasoning skills into language models[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: ACL, 2020: 946–958.
[5] KIM J, KANG Junmo, KIM K M, et al. Exploiting numerical-contextual knowledge to improve numerical reasoning in question answering[C]//Findings of the Association for Computational Linguistics: NAACL 2022. Seattle: ACL, 2022: 1811–1821.
[6] PI Xinyu, LIU Qian, CHEN Bei, et al. Reasoning like program executors[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Abu Dhabi: ACL, 2022: 761–779.
[7] HU Minghao, PENG Yuxing, HUANG Zhen, et al. A multi-type multi-span network for reading comprehension that requires discrete reasoning[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Hong Kong: ACL, 2019: 1596–1606.
[8] RAN Qiu, LIN Yankai, LI Peng, et al. NumNet: machine reading comprehension with numerical reasoning[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Hong Kong: ACL, 2019: 2474–2484.
[9] CHEN Kunlong, XU Weidi, CHENG Xingyi, et al. Question directed graph attention network for numerical reasoning over text[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Online: ACL, 2020: 6759–6768.
[10] 孟祥福, 温晶, 李子函, 等. 多重注意力指导下的异构图嵌入方法[J]. 智能系统学报, 2023, 18(4): 688-698.
MENG Xiangfu, WEN Jing, LI Zihan, et al. Heterogeneous graph embedding method guided by the multi-attention mechanism[J]. CAAI transactions on intelligent systems, 2023, 18(4): 688-698.
[11] DUA D, WANG Yizhong, DASIGI P, et al. DROP: a reading comprehension benchmark requiring discrete reasoning over paragraphs[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Minneapolis: NAACL, 2019: 2368–2378.
[12] DEVLIN J, CHANG Mingwei, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics. Minneapolis: NAACL, 2019: 4171–4186.
[13] LIU Zhuang, LIN Wayne, SHI Ya, et al. A robustly optimized BERT pre-training approach with post-training[C]//Proceedings of the 20th Chinese National Conference on Computational Linguistics. Hohhot: CCL, 2021: 1218–1227.
[14] SCARSELLI F, GORI M, TSOI A C, et al. The graph neural network model[J]. IEEE transactions on neural networks, 2009, 20(1): 61-80.
[15] KIPF T, WELLING M. Semi-supervised classification with graph convolutional networks[EB/OL]. (2016-09-09)[2023-08-31]. https://arxiv.org/abs/1609.02907.
[16] 吴国栋, 查志康, 涂立静, 等. 图神经网络推荐研究进展[J]. 智能系统学报, 2020, 15(1): 14-24.
WU Guodong, ZHA Zhikang, TU Lijing, et al. Research advances in graph neural network recommendation[J]. CAAI transactions on intelligent systems, 2020, 15(1): 14-24.
[17] HAMILTON W, YING R, LESKOVEC J. Inductive representation learning on large graphs[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach: NIPS, 2017: 1025–1035.
[18] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[EB/OL]. (2017-10-30)[2023-08-31]. https://arxiv.org/abs/1710.10903.
[19] HU Ziniu, DONG Yuxiao, WANG Kuansan, et al. Heterogeneous graph transformer[C]//Proceedings of The Web Conference 2020. Taipei: ACM, 2020: 2704–2710.
[20] 谢小杰, 梁英, 王梓森, 等. 基于图卷积的异质网络节点分类方法[J]. 计算机研究与发展, 2022, 59(7): 1470-1485.
XIE Xiaojie, LIANG Ying, WANG Zisen, et al. Heterogeneous network node classification method based on graph convolution[J]. Journal of computer research and development, 2022, 59(7): 1470-1485.
[21] 任嘉睿, 张海燕, 朱梦涵, 等. 基于元图卷积的异质网络嵌入学习算法[J]. 计算机研究与发展, 2022, 59(8): 1683-1693.
REN Jiarui, ZHANG Haiyan, ZHU Menghan, et al. Embedding learning algorithm for heterogeneous network based on meta-graph convolution[J]. Journal of computer research and development, 2022, 59(8): 1683-1693.
[22] MANNING C, SURDEANU M, BAUER J, et al. The Stanford CoreNLP natural language processing toolkit[C]//Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Baltimore: ACL, 2014: 55–60.
[23] XU Runxin, LIU Tianyu, LI Lei, et al. Document-level event extraction via heterogeneous graph-based interaction model with a tracker[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. Online: ACL, 2021: 3533–3546.
[24] THAWANI A, PUJARA J, ILIEVSKI F, et al. Representing numbers in NLP: a survey and a vision[C]//Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Online: ACL, 2021: 644–656.
[25] SUNDARARAMAN D, SI Shijing, SUBRAMANIAN V, et al. Methods for numeracy-preserving word embeddings[C]//Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. Online: ACL, 2020: 4742–4753.
相似文献/Similar Articles:
[1]张恒,何文玢,何军,等.医学知识增强的肿瘤分期多任务学习模型[J].智能系统学报,2021,16(4):739.[doi:10.11992/tis.202010005]
 ZHANG Heng,HE Wenbin,HE Jun,et al.Multi-task tumor stage learning model with medical knowledge enhancement[J].CAAI Transactions on Intelligent Systems,2021,16(4):739.[doi:10.11992/tis.202010005]
[2]杜永萍,赵以梁,阎婧雅,等.基于深度学习的机器阅读理解研究综述[J].智能系统学报,2022,17(6):1074.[doi:10.11992/tis.202107024]
 DU Yongping,ZHAO Yiliang,YAN Jingya,et al.Survey of machine reading comprehension based on deep learning[J].CAAI Transactions on Intelligent Systems,2022,17(6):1074.[doi:10.11992/tis.202107024]

备注/Memo

Received: 2023-08-31.
Foundation: Natural Science Foundation of Xinjiang Uygur Autonomous Region (2022D01A227); National Natural Science Foundation of China (62066044).
Biographies: BAI Yukang, master's student, whose main research interest is extractive machine reading comprehension, E-mail: 1465215696@qq.com; CHEN Yanmin, lecturer, whose main research interests are data mining and natural language processing; he has participated in two National Natural Science Foundation of China projects and holds three granted invention patents, E-mail: ymchen16@mail.ustc.edu.cn; FAN Xiaochao, associate professor, whose main research interests include natural language processing, text sentiment analysis, implicit sentiment analysis, biomedical text knowledge mining, cognition-based humor computation, irony recognition, and multimodal sentiment analysis; he has led one National Natural Science Foundation of China project and published more than 30 academic papers, E-mail: 37769630@qq.com.
Corresponding author: CHEN Yanmin. E-mail: ymchen16@mail.ustc.edu.cn

更新日期/Last Update: 2024-09-05