[1]XU Jian.Generating reading comprehension questions automatically based on semantic graphs[J].CAAI Transactions on Intelligent Systems,2024,19(2):420-428.[doi:10.11992/tis.202207001]

Generating reading comprehension questions automatically based on semantic graphs

References:
[1] PEARSON P D. Handbook of research on reading comprehension[M]. London: Routledge, 2014: 27-55.
[2] CUI Tiejun, LI Shasha. Concept formation process of human and artificial intelligence systems[J]. CAAI transactions on intelligent systems, 2022, 17(5): 1012–1020.
[3] ZHOU Qingyu, YANG Nan, WEI Furu, et al. Neural question generation from text: a preliminary study[C]//HUANG X, JIANG J, ZHAO D, et al. National CCF Conference on Natural Language Processing and Chinese Computing. Cham: Springer, 2018: 662-671.
[4] ZHAO Yao, NI Xiaochuan, DING Yuanyuan, et al. Paragraph-level neural question generation with maxout pointer and gated self-attention networks[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2018: 3901-3910.
[5] RAJPURKAR P, ZHANG Jian, LOPYREV K, et al. SQuAD: 100,000+ questions for machine comprehension of text[C]//Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2016: 2383-2392.
[6] KOČISKÝ T, SCHWARZ J, BLUNSOM P, et al. The NarrativeQA reading comprehension challenge[J]. Transactions of the association for computational linguistics, 2018, 6: 317–328.
[7] YANG Zhilin, QI Peng, ZHANG Saizheng, et al. HotpotQA: a dataset for diverse, explainable multi-hop question answering[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2018: 2369-2380.
[8] LAI Guokun, XIE Qizhe, LIU Hanxiao, et al. RACE: large-scale ReAding comprehension dataset from examinations[C]//Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2017: 785-794.
[9] LI Qianyu, WANG Bei, JIN Jing, et al. Automatic sleep staging model based on the bi-directional LSTM convolutional network and attention mechanism[J]. CAAI transactions on intelligent systems, 2022, 17(3): 523–530.
[10] DU Xinya, SHAO Junru, CARDIE C. Learning to ask: neural question generation for reading comprehension[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: Association for Computational Linguistics, 2017: 1342-1352.
[11] LIU Ming, ZHANG Jinxu, WU Zhongming. Intelligent questioning technology and its educational application[J]. AI-View, 2022, 9(2): 30–38.
[12] RUS V, PIWEK P, STOYANCHEV S, et al. Question generation shared task and evaluation challenge: status report[C]//Proceedings of the 13th European Workshop on Natural Language Generation. New York: ACM, 2011: 318-320.
[13] DHOLE K, MANNING C D. Syn-QG: syntactic and shallow semantic rules for question generation[C]//Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: Association for Computational Linguistics, 2020: 752-765.
[14] HEILMAN M, SMITH N A. Good question! Statistical ranking for question generation[C]//HLT ’10: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. New York: ACM, 2010: 609-617.
[15] YAO Xuchen, BOUMA G, ZHANG Yi. Semantics-based question generation and implementation[J]. Dialogue & discourse, 2012, 3(2): 11–42.
[16] LABUTOV I, BASU S, VANDERWENDE L. Deep questions without deep understanding[C]//Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Stroudsburg: Association for Computational Linguistics, 2015: 889-898.
[17] SONG Linfeng, WANG Zhiguo, HAMZA W. A unified query-based generative model for question generation and question answering[EB/OL]. (2017-09-04)[2020-01-01]. https://arxiv.org/abs/1709.01058.
[18] YUAN Wei, HE Tieke, DAI Xinyu. Improving neural question generation using deep linguistic representation[C]//Proceedings of the Web Conference 2021. New York: ACM, 2021: 3489-3500.
[19] JIA Xin, ZHOU Wenjie, SUN Xu, et al. EQG-RACE: examination-type question generation[C]//Proceedings of the AAAI Conference on Artificial Intelligence. New York: AAAI, 2021: 13143-13151.
[20] LI Wubo, ZHANG Lei, SHU Xin. Application and research on generative automatic question answering system based on Seq2Seq[J]. Modern computer, 2017(36): 57–60.
[21] LI Jingcong, PAN Weijian, LIN Zhenyuan, et al. Emotional EEG signal recognition method using multi-path graph attention network[J]. CAAI transactions on intelligent systems, 2022, 17(3): 531–539.
[22] ZHANG Yuhao, QI Peng, MANNING C D. Graph convolution over pruned dependency trees improves relation extraction[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2018: 2205-2215.
[23] MANNING C, SURDEANU M, BAUER J, et al. The Stanford CoreNLP natural language processing toolkit[C]//Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Stroudsburg: Association for Computational Linguistics, 2014: 55-60.
[24] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000-6010.
[25] LUONG T, PHAM H, MANNING C D. Effective approaches to attention-based neural machine translation[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2015: 1412-1421.
[26] SEE A, LIU P J, MANNING C D. Get to the point: summarization with pointer-generator networks[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg: Association for Computational Linguistics, 2017: 1073-1083.
[27] PENNINGTON J, SOCHER R, MANNING C. Glove: global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2014: 1532-1543.
[28] LIU Bingran. Neural question generation based on Seq2Seq[C]//Proceedings of the 2020 5th International Conference on Mathematics and Artificial Intelligence. New York: ACM, 2020: 119-123.
[29] SUN Xingwu, LIU Jing, LYU Yajuan, et al. Answer-focused and position-aware neural question generation[C]//Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2018: 3930-3939.
[30] LOPEZ L E, CRUZ D K, CRUZ J C B, et al. Simplifying paragraph-level question generation via transformer language models[C]//PHAM D N, THEERAMUNKONG T, GOVERNATORI G, et al. Pacific Rim International Conference on Artificial Intelligence. Cham: Springer, 2021: 323-334.
[31] ZHANG Shiyue, BANSAL M. Addressing semantic drift in question generation for semi-supervised question answering[C]//Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2019.


Copyright © CAAI Transactions on Intelligent Systems