[1] ZHANG Heng, HE Wenbin, HE Jun, et al. Multi-task tumor stage learning model with medical knowledge enhancement[J]. CAAI Transactions on Intelligent Systems, 2021, 16(4): 739-745. [doi:10.11992/tis.202010005]

Multi-task tumor stage learning model with medical knowledge enhancement

References:
[1] YAO Yunfeng. Evaluation of tumor stage and curative effect[J]. Chinese journal of the frontiers of medical science (electronic version), 2010, 2(4): 70-75.
[2] ZHOU Bin, JI Ke, XIN Ling, et al. Updates and interpretations of the 8th edition of AJCC breast cancer staging system[J]. Chinese journal of practical surgery, 2017, 37(1): 10-14.
[3] HU Zikun, LI Xiang, TU Cunchao, et al. Few-shot charge prediction with discriminative legal attributes[C]//Proceedings of the 27th International Conference on Computational Linguistics. New Mexico, USA, 2018: 487-498.
[4] KIM Y. Convolutional neural networks for sentence classification[C]//Proceedings of 2014 Conference on Empirical Methods in Natural Language Processing. Doha, Qatar, 2014: 1746-1751.
[5] TANG Duyu, QIN Bing, LIU Ting. Document modeling with gated recurrent neural network for sentiment classification[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal, 2015: 1422-1432.
[6] JOULIN A, GRAVE E, BOJANOWSKI P, et al. Bag of tricks for efficient text classification[C]//Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Valencia, Spain, 2017: 427-431.
[7] JOHNSON R, ZHANG Tong. Deep pyramid convolutional neural networks for text categorization[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada, 2017: 562-570.
[8] YAO Liang, MAO Chengsheng, LUO Yuan. Graph convolutional networks for text classification[C]//Proceedings of the 33rd AAAI Conference on Artificial Intelligence. Honolulu, USA, 2019: 7370-7377.
[9] SUN Chi, QIU Xipeng, XU Yige, et al. How to fine-tune BERT for text classification?[C]//Proceedings of the 18th China National Conference on Chinese Computational Linguistics. Kunming, China, 2019: 194-206.
[10] ELHOSEINY M, SALEH B, ELGAMMAL A. Write a classifier: zero-shot learning using purely textual descriptions[C]//Proceedings of 2013 IEEE International Conference on Computer Vision. Sydney, Australia, 2013: 2584-2591.
[11] CUI Yiming, CHEN Zhipeng, WEI Si, et al. Attention-over-attention neural networks for reading comprehension[C]//Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. Vancouver, Canada, 2017: 593-602.
[12] SEO M, KEMBHAVI A, FARHADI A, et al. Bidirectional attention flow for machine comprehension[EB/OL]. (2016-11-05) [2019-10-12]. https://arxiv.org/abs/1611.01603.
[13] PASZKE A, GROSS S, CHINTALA S, et al. Automatic differentiation in PyTorch[C]//Proceedings of the 31st Conference on Neural Information Processing Systems. Long Beach, USA, 2017.
[14] KINGMA D P, BA J. Adam: a method for stochastic optimization[EB/OL]. (2014-12-22) [2019-12-12]. https://arxiv.org/pdf/1412.6980.pdf.
[15] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. The journal of machine learning research, 2014, 15(1): 1929-1958.


Copyright © CAAI Transactions on Intelligent Systems