ZHANG Xinpei, ZHOU Yao, ZHANG Yi. Knowledge distillation method for fetal ultrasound section identification[J]. CAAI Transactions on Intelligent Systems, 2022, 17(1): 181–191. [doi: 10.11992/tis.202105007]

Knowledge distillation method for fetal ultrasound section identification

References:
[1] MARACI M A, NAPOLITANO R, PAPAGEORGHIOU A, et al. Searching for structures of interest in an ultrasound video sequence[C]//Proceedings of the 5th International Workshop on Machine Learning in Medical Imaging. Boston, USA, 2014: 133–140.
[2] PLATT J. Sequential minimal optimization: a fast algorithm for training support vector machines[J]. Advances in kernel methods: support vector learning, 1998, 7: 1–22.
[3] BAUMGARTNER C F, KAMNITSAS K, MATTHEW J, et al. SonoNet: real-time detection and localisation of fetal standard scan planes in freehand ultrasound[J]. IEEE transactions on medical imaging, 2017, 36(11): 2204–2215.
[4] MARACI M A, BRIDGE C P, NAPOLITANO R, et al. A framework for analysis of linear ultrasound videos to detect fetal presentation and heartbeat[J]. Medical image analysis, 2017, 37: 22–36.
[5] LAFFERTY J D, MCCALLUM A, PEREIRA F C N. Conditional random fields: probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the 18th International Conference on Machine Learning. San Francisco, USA, 2001: 282–289.
[6] RYOU H, YAQUB M, CAVALLARO A, et al. Automated 3D ultrasound biometry planes extraction for first trimester fetal assessment[C]//Proceedings of the 7th International Workshop on Machine Learning in Medical Imaging. Athens, Greece, 2016: 196–204.
[7] CHENG P M, MALHI M S. Transfer learning with convolutional neural networks for classification of abdominal ultrasound images[J]. Journal of digital imaging, 2017, 30(2): 234–243.
[8] DONAHUE J. CaffeNet (GitHub page)[EB/OL]. (2016-05-06)[2021-05-11]. https://github.com/BVLC/caffe/tree/master/models/bvlc_reference_caffenet.
[9] SIMONYAN K. VGG team ILSVRC-2014 model with 16 weight layers (GitHub page)[EB/OL]. (2016-05-16)[2021-05-11]. https://gist.github.com/ksimonyan/211839e770f7b538e2d8.
[10] HOWARD A G, ZHU Menglong, CHEN Bo, et al. MobileNets: efficient convolutional neural networks for mobile vision applications[EB/OL]. (2017-04-17)[2021-05-11]. https://arxiv.org/abs/1704.04861v1.
[11] SANDLER M, HOWARD A, ZHU Menglong, et al. MobileNetV2: inverted residuals and linear bottlenecks[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA, 2018: 4510–4520.
[12] HOWARD A, SANDLER M, CHU G, et al. Searching for MobileNetV3[EB/OL]. (2019-05-06)[2021-05-11]. https://arxiv.org/abs/1905.02244v3.
[13] BUCILUǍ C, CARUANA R, NICULESCU-MIZIL A. Model compression[C]//Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Philadelphia, USA, 2006: 535–541.
[14] ROMERO A, BALLAS N, KAHOU S E, et al. FitNets: hints for thin deep nets[EB/OL]. (2014-12-19)[2021-05-11]. https://arxiv.org/abs/1412.6550v2.
[15] HINTON G, VINYALS O, DEAN J. Distilling the knowledge in a neural network[EB/OL]. (2015-03-09)[2021-05-11]. https://arxiv.org/abs/1503.02531.
[16] JIN Xiao, PENG Baoyun, WU Yichao, et al. Knowledge distillation via route constrained optimization[C]//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, South Korea, 2019: 1345–1354.
[17] PHUONG M H, LAMPERT C H. Towards understanding knowledge distillation[C]//Proceedings of the 36th International Conference on Machine Learning. Long Beach, USA, 2019.
[18] JI Guangda, ZHU Zhanxing. Knowledge distillation in wide neural networks: risk bound, data efficiency and imperfect teacher[EB/OL]. (2020-10-20)[2021-05-11]. https://arxiv.org/abs/2010.10090.
[19] ZHANG Ying, XIANG Tao, HOSPEDALES T M, et al. Deep mutual learning[C]//Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City, USA, 2018: 4320–4328.
[20] MOBAHI H, FARAJTABAR M, BARTLETT P L. Self-distillation amplifies regularization in Hilbert space[EB/OL]. (2020-02-13)[2021-05-11]. https://arxiv.org/abs/2002.05715v1.
[21] ZHAI Mengyao, CHEN Lei, TUNG F, et al. Lifelong GAN: continual learning for conditional image generation[C]//Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. Seoul, South Korea, 2019: 2759–2768.
[22] YUAN Li, TAY F E H, LI Guilin, et al. Revisit knowledge distillation: a teacher-free framework[C]//Proceedings of the International Conference on Learning Representations. Addis Ababa, Ethiopia, 2020.
