[1]ZHOU Kairui,LIU Xin,JING Liping,et al.Concept-driven discriminative feature learning for few-shot learning[J].CAAI Transactions on Intelligent Systems,2023,18(1):162-172.[doi:10.11992/tis.202203061]

Concept-driven discriminative feature learning for few-shot learning

References:
[1] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770–778.
[2] SNELL J, SWERSKY K, ZEMEL R S. Prototypical networks for few-shot learning[C]//Advances in Neural Information Processing Systems. Cambridge: MIT Press, 2017: 4077–4087.
[3] SUNG F, YANG Yongxin, ZHANG Li, et al. Learning to compare: relation network for few-shot learning[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 1199–1208.
[4] CHEN Weiyu, LIU Yencheng, KIRA Z, et al. A closer look at few-shot classification[C]//Proceedings of the International Conference on Learning Representations. La Jolla: ICLR, 2019.
[5] ZHANG Chi, CAI Yujun, LIN Guosheng, et al. DeepEMD: few-shot image classification with differentiable earth mover’s distance and structured classifiers[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 12200–12210.
[6] YE Hanjia, HU Hexiang, ZHAN Dechuan, et al. Few-shot learning via embedding adaptation with set-to-set functions[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 8805–8814.
[7] LI Wenbin, WANG Lei, XU Jinglin, et al. Revisiting local descriptor based image-to-class measure for few-shot learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 7253–7260.
[8] ORESHKIN B N, RODRIGUEZ P, LACOSTE A. TADAM: task dependent adaptive metric for improved few-shot learning[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2018: 719–729.
[9] RAVI S, LAROCHELLE H. Optimization as a model for few-shot learning[C]//Proceedings of the International Conference on Learning Representations. La Jolla: ICLR, 2017.
[10] FINN C, ABBEEL P, LEVINE S. Model-agnostic meta-learning for fast adaptation of deep networks[C]//International Conference on Machine Learning. New York: ACM, 2017: 1126–1135.
[11] VINYALS O, BLUNDELL C, LILLICRAP T, et al. Matching networks for one shot learning[C]//Proceedings of the 30th International Conference on Neural Information Processing Systems. Cambridge: MIT Press, 2016: 3637–3645.
[12] HOU R, CHANG H, MA B, et al. Cross attention network for few-shot classification[C]//Advances in Neural Information Processing Systems. Cambridge: MIT Press, 2019: 4003–4014.
[13] XING C, ROSTAMZADEH N, ORESHKIN B N, et al. Adaptive cross-modal few-shot learning[C]//Advances in Neural Information Processing Systems. Cambridge: MIT Press, 2019: 4847–4857.
[14] LIFCHITZ Y, AVRITHIS Y, PICARD S, et al. Dense classification and implanting for few-shot learning[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 9250–9259.
[15] WANG Dewen, WEI Botao. A small-sample image classification method based on a Siamese variational auto-encoder[J]. CAAI Transactions on Intelligent Systems, 2021, 16(2): 254–262.
[16] ZHANG Lingling, CHEN Yiwei, WU Wenjun, et al. Interpretable few-shot learning with contrastive constraint[J]. Journal of computer research and development, 2021, 58(12): 2573–2584.
[17] WEI Shihong, LIU Hongmei, TANG Hong, et al. Multilevel metric networks for few-shot learning[J/OL]. Computer engineering and applications, 2021: 1–10. (2021-10-12) [2023-01-05]. https://kns.cnki.net/kcms/detail/11.2127.TP.20211012.1508.008.html.
[18] ZHENG Yan, WANG Ronggui, YANG Juan, et al. Principal characteristic networks for few-shot learning[J]. Journal of visual communication and image representation, 2019, 59: 563–573.
[19] XU Chengming, FU Yanwei, LIU Chen, et al. Learning dynamic alignment via meta-filter for few-shot learning[C]//2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 5178–5187.
[20] GARCIA V, BRUNA J. Few-shot learning with graph neural networks[C]//Proceedings of the International Conference on Learning Representations. La Jolla: ICLR, 2018.
[21] ZHANG H, CISSE M, DAUPHIN Y N, et al. Mixup: beyond empirical risk minimization[C]//Proceedings of the International Conference on Learning Representations. La Jolla: ICLR, 2018.
[22] YUN S, HAN D, CHUN S, et al. CutMix: regularization strategy to train strong classifiers with localizable features[C]//2019 IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 6022–6031.
[23] VERMA V, LAMB A, BECKHAM C, et al. Manifold mixup: better representations by interpolating hidden states[C]//International Conference on Machine Learning. New York: ACM, 2019: 6438–6447.
[24] WANG Yuxiong, GIRSHICK R, HEBERT M, et al. Low-shot learning from imaginary data[C]//2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7278–7286.
[25] NI R, GOLDBLUM M, SHARAF A, et al. Data augmentation for meta-learning[C]//International Conference on Machine Learning. New York: ACM, 2021: 8152–8161.
[26] SEO J W, JUNG H G, LEE S W. Self-augmentation: Generalizing deep networks to unseen classes for few-shot learning[J]. Neural networks, 2021, 138: 140–149.
[27] MANGLA P, SINGH M, SINHA A, et al. Charting the right manifold: manifold mixup for few-shot learning[C]//2020 IEEE Winter Conference on Applications of Computer Vision. Snowmass: IEEE, 2020: 2207–2216.
[28] LIU Jialin, CHAO Fei, LIN C M. Task augmentation by rotating for meta-learning[EB/OL]. (2020-02-08) [2022-03-01]. https://arxiv.org/abs/2003.00804.
[29] KIM J H, ON K W, LIM W, et al. Hadamard product for low-rank bilinear pooling[C]//Proceedings of the International Conference on Learning Representations. La Jolla: ICLR, 2017.
[30] PENNINGTON J, SOCHER R, MANNING C. GloVe: global vectors for word representation[C]//Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Doha: Association for Computational Linguistics, 2014: 1532–1543.
[31] LIU Chen, FU Yanwei, XU Chengming, et al. Learning a few-shot embedding model with contrastive learning[J]. Proceedings of the AAAI conference on artificial intelligence, 2021, 35(10): 8635–8643.
[32] REN Mengye, TRIANTAFILLOU E, RAVI S, et al. Meta-learning for semi-supervised few-shot classification[EB/OL]. (2018-03-02) [2022-03-01]. https://arxiv.org/abs/1803.00676.
[33] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//NIPS’12: Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1. New York: ACM, 2012: 1097–1105.
[34] KRIZHEVSKY A. Learning multiple layers of features from tiny images[R]. Toronto: University of Toronto, 2009.
[35] LEE K, MAJI S, RAVICHANDRAN A, et al. Meta-learning with differentiable convex optimization[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019: 10649–10657.
[36] RUSU A A, RAO D, SYGNOWSKI J, et al. Meta-learning with latent embedding optimization[C]//Proceedings of the International Conference on Learning Representations. La Jolla: ICLR, 2019.
[37] WANG Yikai, XU Chengming, LIU Chen, et al. Instance credibility inference for few-shot learning[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 12833–12842.
[38] TIAN Yonglong, WANG Yue, KRISHNAN D, et al. Rethinking few-shot image classification: a good embedding is all you need?[M]//Computer Vision – ECCV 2020. Cham: Springer International Publishing, 2020: 266–282.
[39] ZHOU Ziqi, QIU Xi, XIE Jiangtao, et al. Binocular mutual learning for improving few-shot classification[C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 8382–8391.
[40] WAH C, BRANSON S, WELINDER P, et al. The caltech-ucsd birds-200-2011 dataset[EB/OL]. (2011-10-26) [2022-03-01]. https://authors.library.caltech.edu/27452.
[41] LIU Bin, CAO Yue, LIN Yutong, et al. Negative margin matters: understanding margin in few-shot classification[M]//Computer Vision – ECCV 2020. Cham: Springer International Publishing, 2020: 438–455.

Copyright © CAAI Transactions on Intelligent Systems