[1] LI Xu, CAI Biao, HU Nengbing. Research on graph contrastive learning method based on ternary mutual information[J]. CAAI Transactions on Intelligent Systems, 2024, 19(5): 1257–1267. [doi:10.11992/tis.202308004]

Research on graph contrastive learning method based on ternary mutual information

References:
[1] CAI Biao, ZHU Xinping, QIN Yangxin. Parameters optimization of hybrid strategy recommendation based on particle swarm algorithm[J]. Expert systems with applications, 2021, 168: 114388.
[2] CAI Biao, YANG Xiaowang, HUANG Yusheng, et al. A triangular personalized recommendation algorithm for improving diversity[J]. Discrete dynamics in nature and society, 2018, 2018: 3162068.
[3] HU Fenyu, ZHU Yanqiao, WU Shu, et al. Hierarchical graph convolutional networks for semi-supervised node classification[C]//Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. Macao: International Joint Conferences on Artificial Intelligence Organization, 2019: 4532–4539.
[4] KIPF T, WELLING M. Semi-supervised classification with graph convolutional networks[C]//Proceedings of the 5th International Conference on Learning Representations. Toulon: ICLR, 2017: 1–14.
[5] VELIČKOVIĆ P, CUCURULL G, CASANOVA A, et al. Graph attention networks[C]//Proceedings of the 6th International Conference on Learning Representations. Vancouver: ICLR, 2018: 1–12.
[6] LINSKER R. Self-organization in a perceptual network[J]. Computer, 1988, 21(3): 105–117.
[7] BACHMAN P, HJELM R D, BUCHWALTER W. Learning representations by maximizing mutual information across views[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2019: 15535–15545.
[8] HE Kaiming, FAN Haoqi, WU Yuxin, et al. Momentum contrast for unsupervised visual representation learning[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 9726–9735.
[9] TIAN Yonglong, KRISHNAN D, ISOLA P. Contrastive multiview coding[C]//European Conference on Computer Vision. Cham: Springer, 2020: 776–794.
[10] COLLOBERT R, WESTON J. A unified architecture for natural language processing: deep neural networks with multitask learning[C]//Proceedings of the 25th International Conference on Machine Learning. Helsinki: Association for Computing Machinery, 2008: 160–167.
[11] MNIH A, KAVUKCUOGLU K. Learning word embeddings efficiently with noise-contrastive estimation[C]//Proceedings of the 26th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2013: 2265–2273.
[12] VELIČKOVIĆ P, FEDUS W, HAMILTON W L, et al. Deep graph infomax[C]//Proceedings of the 7th International Conference on Learning Representations. New Orleans: ICLR, 2019: 1–17.
[13] HASSANI K, KHASAHMADI A H. Contrastive multi-view representation learning on graphs[C]//Proceedings of the 37th International Conference on Machine Learning. New York: Association for Computing Machinery, 2020: 4116–4126.
[14] ZHU Yanqiao, XU Yichen, YU Feng, et al. Deep graph contrastive representation learning[EB/OL]. (2020–06–07)[2023–08–03]. http://arxiv.org/abs/2006.04131.
[15] SONG Jingkuan, ZHANG Hanwang, LI Xiangpeng, et al. Self-supervised video hashing with hierarchical binary auto-encoder[J]. IEEE transactions on image processing, 2018, 27(7): 3210–3221.
[16] XU Xing, LU Huimin, SONG Jingkuan, et al. Ternary adversarial networks with self-supervision for zero-shot cross-modal retrieval[J]. IEEE transactions on cybernetics, 2020, 50(6): 2400–2413.
[17] VAN DEN OORD A, LI Yazhe, VINYALS O. Representation learning with contrastive predictive coding[EB/OL]. (2018–07–10) [2023–08–03]. http://arxiv.org/abs/1807.03748.
[18] HJELM R D, FEDOROV A, LAVOIE-MARCHILDON S, et al. Learning deep representations by mutual information estimation and maximization[EB/OL]. (2018–08–20)[2023–08–03]. http://arxiv.org/abs/1808.06670.
[19] CHEN Ting, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[C]//Proceedings of the 37th International Conference on Machine Learning. New York: Association for Computing Machinery, 2020: 1597–1607.
[20] GRILL J B, STRUB F, ALTCHÉ F, et al. Bootstrap your own latent: a new approach to self-supervised learning[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2020: 21271–21284.
[21] ZBONTAR J, JING L, MISRA I, et al. Barlow Twins: self-supervised learning via redundancy reduction[C]//Proceedings of the 38th International Conference on Machine Learning. New York: Association for Computing Machinery, 2021: 12310–12320.
[22] PEROZZI B, AL-RFOU R, SKIENA S. DeepWalk: online learning of social representations[C]//Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: Association for Computing Machinery, 2014: 701–710.
[23] GROVER A, LESKOVEC J. Node2Vec: scalable feature learning for networks[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Francisco: Association for Computing Machinery, 2016: 855–864.
[24] QIU Jiezhong, DONG Yuxiao, MA Hao, et al. Network embedding as matrix factorization: unifying DeepWalk, LINE, PTE, and node2vec[C]//Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining. New York: Association for Computing Machinery, 2018: 459–467.
[25] HAMILTON W L, YING R, LESKOVEC J. Inductive representation learning on large graphs[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2017: 1025–1035.
[26] PENG Zhen, HUANG Wenbing, LUO Minnan, et al. Graph representation learning via graphical mutual information maximization[C]//Proceedings of The Web Conference 2020. New York: Association for Computing Machinery, 2020: 259–270.
[27] ZHU Yanqiao, XU Yichen, YU Feng, et al. Graph contrastive learning with adaptive augmentation[C]//Proceedings of the Web Conference 2021. New York: Association for Computing Machinery, 2021: 2069–2080.
[28] THAKOOR S, TALLEC C, AZAR M G, et al. Large-scale representation learning on graphs via bootstrapping[EB/OL]. (2021–02–12)[2023–08–03]. http://arxiv.org/abs/2102.06514.
[29] BIELAK P, KAJDANOWICZ T, CHAWLA N V. Graph barlow twins: a self-supervised representation learning framework for graphs[EB/OL]. (2021–06–04)[2023–08–03]. https://arxiv.org/abs/2106.02466.
[30] ZHANG Yifei, ZHU Hao, SONG Zixing, et al. COSTA: covariance-preserving feature augmentation for graph contrastive learning[C]//Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. New York: Association for Computing Machinery, 2022: 2524–2534.
[31] KINGMA D, BA J. Adam: a method for stochastic optimization[C]//Proceedings of the 3rd International Conference on Learning Representations. San Diego: ICLR, 2015: 1–15.
[32] CLEVERT D A, UNTERTHINER T, HOCHREITER S. Fast and accurate deep network learning by exponential linear units (ELUs)[C]//Proceedings of the 4th International Conference on Learning Representations. San Juan: ICLR, 2016: 1–14.
[33] KIPF T N, WELLING M. Variational graph auto-encoders[EB/OL]. (2016–11–21)[2023–08–03]. http://arxiv.org/abs/1611.07308.
[34] MO Yujie, PENG Liang, XU Jie, et al. Simple unsupervised graph representation learning[C]//Proceedings of the 36th AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022, 36(7): 7797–7805.
