[1] YU Runyu, LI Yawen, LI Ang. Semantic similarity computing for scientific and technological conferences[J]. CAAI Transactions on Intelligent Systems, 2022, 17(4): 737–743. [doi: 10.11992/tis.202203050]

Semantic similarity computing for scientific and technological conferences

References:
[1] 周园春, 王卫军, 乔子越, 等. 科技大数据知识图谱构建方法及应用研究综述[J]. 中国科学: 信息科学, 2020, 50(7): 957–987.
ZHOU Yuanchun, WANG Weijun, QIAO Ziyue, et al. A survey on the construction methods and applications of sci-tech big data knowledge graph[J]. Scientia sinica (informationis), 2020, 50(7): 957–987.
[2] 苏晓娟, 张英杰, 白晨, 等. 科技大数据背景下的中英双语语料库的构建及其特点研究[J]. 中国科技资源导刊, 2019, 51(6): 87–92.
SU Xiaojuan, ZHANG Yingjie, BAI Chen, et al. Research of bilingual corpus construction and its characteristics in big data[J]. China science & technology resources review, 2019, 51(6): 87–92.
[3] 胡吉颖, 谢靖, 钱力, 等. 基于知识图谱的科技大数据知识发现平台建设[J]. 数据分析与知识发现, 2019, 3(1): 55–62.
HU Jiying, XIE Jing, QIAN Li, et al. Constructing big data platform for sci-tech knowledge discovery with knowledge graph[J]. Data analysis and knowledge discovery, 2019, 3(1): 55–62.
[4] TONG Yuqiang, GU Lize. A news text clustering method based on similarity of text labels[M]//Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering. Cham: Springer International Publishing, 2019: 496–503.
[5] LI Wenling, JIA Yingmin, DU Junping, et al. Distributed multiple-model estimation for simultaneous localization and tracking with NLOS mitigation[J]. IEEE transactions on vehicular technology, 2013, 62(6): 2824–2830.
[6] DAS A, MANDAL J, DANIAL Z, et al. A novel approach for automatic Bengali question answering system using semantic similarity analysis[J]. International journal of speech technology, 2020, 23(4): 873–884.
[7] FANG Yuke, DENG Weihong, DU Junping, et al. Identity-aware CycleGAN for face photo-sketch synthesis and recognition[J]. Pattern recognition, 2020, 102: 107249.
[8] QIAN Ming, LIU J, LI Chaofeng, et al. A comparative study of English-Chinese translations of court texts by machine and human translators and the Word2Vec based similarity measure’s ability to gauge human evaluation biases[C]//Proceedings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks. Dublin: ACL, 2019: 95–100.
[9] XUE Zhe, DU Junping, DU Dawei, et al. Deep low-rank subspace ensemble for multi-view clustering[J]. Information sciences, 2019, 482: 210–227.
[10] RISTAD E S, YIANILOS P N. Learning string-edit distance[J]. IEEE transactions on pattern analysis and machine intelligence, 1998, 20(5): 522–532.
[11] HU Weiming, GAO Jun, LI Bing, et al. Anomaly detection using local kernel density estimation and context-based regression[J]. IEEE transactions on knowledge and data engineering, 2020, 32(2): 218–233.
[12] NIWATTANAKUL S, SINGTHONGCHAI J, NAENUDORN E, et al. Using of Jaccard coefficient for keywords similarity[C]//Proceedings of the International MultiConference of Engineers and Computer Scientists. Hong Kong: Newswood Limited, 2013, 1(6): 380–384.
[13] KOU Feifei, DU Junping, HE Yijiang, et al. Social network search based on semantic analysis and learning[J]. CAAI transactions on intelligence technology, 2016, 1(4): 293–302.
[14] LI Wenling, JIA Yingmin, DU Junping. Variance-constrained state estimation for nonlinearly coupled complex networks[J]. IEEE transactions on cybernetics, 2018, 48(2): 818–824.
[15] MIKOLOV T, CHEN Kai, CORRADO G, et al. Efficient estimation of word representations in vector space[EB/OL]. New York: arXiv, 2013. (2013-01-16) [2022-03-24]. https://arxiv.org/abs/1301.3781.
[16] PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2014: 1532–1543.
[17] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[EB/OL]. New York: arXiv, 2018. (2018-03-22) [2020-07-01]. https://arxiv.org/abs/1802.05365.
[18] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//NIPS’17: Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM, 2017: 6000–6010.
[19] RADFORD A, NARASIMHAN K. Improving language understanding by generative pre-training[EB/OL]. (2018-11-05) [2020-07-01]. https://www.semanticscholar.org/paper/Improving-Language-Understanding-by-Generative-Radford-Narasimhan/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035.
[20] DEVLIN J, CHANG Mingwei, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[EB/OL]. New York: arXiv, 2018. (2018-10-11) [2022-03-20]. https://arxiv.org/abs/1810.04805.
[21] HUANG Posen, HE Xiaodong, GAO Jianfeng, et al. Learning deep structured semantic models for web search using clickthrough data[C]//CIKM’13: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management. New York: ACM, 2013: 2333–2338.
[22] PALANGI H, DENG L, SHEN Y, et al. Semantic modelling with long-short-term memory for information retrieval[EB/OL]. New York: arXiv, 2014. (2014-12-20) [2022-03-20]. https://arxiv.org/abs/1412.6629.
[23] PONTES E L, HUET S, LINHARES A C, et al. Predicting the semantic textual similarity with Siamese CNN and LSTM[EB/OL]. New York: arXiv, 2018. (2018-10-24) [2022-03-20]. https://arxiv.org/abs/1810.10641.
[24] REIMERS N, GUREVYCH I. Sentence-BERT: sentence embeddings using Siamese BERT-networks[EB/OL]. New York: arXiv, 2019. (2019-08-27) [2022-03-20]. https://arxiv.org/abs/1908.10084.
[25] LI Bohan, ZHOU Hao, HE Junxian, et al. On the sentence embeddings from pre-trained language models[EB/OL]. New York: arXiv, 2020. (2020-11-02) [2022-03-20]. https://arxiv.org/abs/2011.05864.