[1]LIU Wei,WANG Xinyu,LIU Guangwei,et al.Semi-supervised image classification method fused with relational features[J].CAAI Transactions on Intelligent Systems,2022,17(5):886-899.[doi:10.11992/tis.202109022]

Semi-supervised image classification method fused with relational features

References:
[1] MIN Fan, WANG Hongjie, LIU Fulun, et al. SUCE: semi-supervised binary classification based on clustering ensemble[J]. CAAI transactions on intelligent systems, 2018, 13(6): 974–980.
[2] YAO Xiaohong, HUANG Hengjun. Semi-supervised clustering method for non-negative functional data[J]. Journal of frontiers of computer science and technology, 2021, 15(12): 2438–2448.
[3] SAJJADI M, JAVANMARDI M, TASDIZEN T. Regularization with stochastic transformations and perturbations for deep semi-supervised learning[C]//NIPS’16: Proceedings of the 30th International Conference on Neural Information Processing Systems. New York: ACM, 2016: 1171–1179.
[4] CUBUK E D, ZOPH B, MANÉ D, et al. AutoAugment: learning augmentation strategies from data[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 113–123.
[5] LAINE S, AILA T. Temporal ensembling for semi-supervised learning[EB/OL]. (2017-03-15)[2021-01-01]. https://arxiv.org/abs/1610.02242.
[6] LI Yingting, LIU Lu, TAN R T. Certainty-driven consistency loss for semi-supervised learning[EB/OL]. (2017-05-07)[2021-05-01]. https://arxiv.org/abs/1901.05657v1.
[7] ATHIWARATKUN B, FINZI M, IZMAILOV P, et al. There are many consistent explanations of unlabeled data: why you should average[EB/OL]. (2019-02-21)[2021-02-15]. https://arxiv.org/abs/1806.05594.
[8] SZEGEDY C, LIU Wei, JIA Yangqing, et al. Going deeper with convolutions[C]//2015 IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1–9.
[9] CHEN Yunpeng, ROHRBACH M, YAN Zhicheng, et al. Graph-based global reasoning networks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 433–442.
[10] LIU Qinghui, KAMPFFMEYER M, JENSSEN R, et al. Self-constructing graph convolutional networks for semantic labeling[C]//IGARSS 2020 - 2020 IEEE International Geoscience and Remote Sensing Symposium. Waikoloa: IEEE, 2020: 1801–1804.
[11] JIANG Bo, ZHANG Ziyan, LIN Doudou, et al. Semi-supervised learning with graph learning-convolutional networks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 11305–11312.
[12] ZHU Xiaojin, GHAHRAMANI Zoubin. Learning from labeled and unlabeled data with label propagation: CMU-CALD-02-107[R]. Pittsburgh: Carnegie Mellon University, 2002.
[13] WESTON J, RATLE F, MOBAHI H, et al. Deep learning via semi-supervised embedding[M]//Lecture Notes in Computer Science. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012: 639–655.
[14] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[J]. Advances in neural information processing systems, 2014, 27(1): 2672–2680.
[15] SPRINGENBERG J T. Unsupervised and semi-supervised learning with categorical generative adversarial networks[EB/OL]. (2016-04-30)[2020-05-15]. https://arxiv.org/abs/1511.06390.
[16] CHANG Chuanyu, CHEN T Y, CHUNG P C. Semi-supervised learning using generative adversarial networks[C]//2018 IEEE Symposium Series on Computational Intelligence. Bangalore: IEEE, 2018: 892–896.
[17] BELKIN M, NIYOGI P. Laplacian eigenmaps for dimensionality reduction and data representation[J]. Neural computation, 2003, 15(6): 1373–1396.
[18] NAIR V, HINTON G E. Rectified linear units improve restricted Boltzmann machines[C]//Proceedings of the ICML. Haifa, Israel, 2010: 807–814.
[19] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks[EB/OL]. (2017-02-22)[2019-12-15]. https://arxiv.org/abs/1609.02907.
[20] MOORE A W. An introductory tutorial on kd-trees. Efficient memory-based learning for robot control: 209[R]. Cambridge: Computer Laboratory, University of Cambridge, 1991.
[21] GOODFELLOW I J, BULATOV Y, IBARZ J, et al. Multi-digit number recognition from street view imagery using deep convolutional neural networks[EB/OL]. (2014-04-14)[2019-06-05]. https://arxiv.org/abs/1312.6082.
[22] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[EB/OL]. (2015-04-10)[2020-04-20]. https://arxiv.org/abs/1409.1556.
[23] ZAGORUYKO S, KOMODAKIS N. Wide residual networks[EB/OL]. (2017-06-14)[2020-12-15]. https://arxiv.org/abs/1605.07146.
[24] OLIVER A, ODENA A, RAFFEL C A, et al. Realistic evaluation of deep semi-supervised learning algorithms[J]. Advances in neural information processing systems, 2018, 31(1): 3235–3246.
[25] TARVAINEN A, VALPOLA H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results[J]. Advances in neural information processing systems, 2017, 30(1): 1195–1204.
[26] MIYATO T, MAEDA S I, KOYAMA M, et al. Virtual adversarial training: a regularization method for supervised and semi-supervised learning[J]. IEEE transactions on pattern analysis and machine intelligence, 2019, 41(8): 1979–1993.
[27] ZHANG Hongyi, CISSE M, DAUPHIN Y N, et al. mixup: beyond empirical risk minimization[EB/OL]. (2018-04-27)[2020-04-01]. https://arxiv.org/abs/1710.09412.
[28] LEE D H. Pseudo-label: the simple and efficient semi-supervised learning method for deep neural networks[C]//Workshop on Challenges in Representation Learning, ICML. 2013, 3(2): 896–901.

Copyright © CAAI Transactions on Intelligent Systems