LIANG Yan, WEN Xing, PAN Jiahui. Cross-dataset facial expression recognition method fusing global and local features[J]. CAAI Transactions on Intelligent Systems, 2023, 18(6): 1205-1212. [doi: 10.11992/tis.202212030]

Cross-dataset facial expression recognition method fusing global and local features

References:
[1] MEHRABIAN A. Communication without words[M]//Communication theory. [S.l.]: Routledge, 2017: 193-200.
[2] LI Shan, DENG Weihong. Deep facial expression recognition: a survey[J]. IEEE transactions on affective computing, 2022, 13(3): 1195–1215.
[3] XU Ruijia, LI Guanbin, YANG Jihan, et al. Larger norm more transferable: an adaptive feature norm approach for unsupervised domain adaptation[C]//2019 IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2020: 1426-1435.
[4] LEE Chenyu, BATRA T, BAIG M H, et al. Sliced Wasserstein discrepancy for unsupervised domain adaptation[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2020: 10277-10287.
[5] MO Hongwei, FU Zhijie. Unsupervised cross-domain expression recognition based on transfer learning[J]. CAAI transactions on intelligent systems, 2021, 16(3): 397–406. (in Chinese)
[6] LONG Mingsheng, CAO Yue, WANG Jianmin, et al. Learning transferable features with deep adaptation networks[C]//32nd International Conference on Machine Learning. Lille: ICML, 2015, 1: 97-105.
[7] LI Shan, DENG Weihong. A deeper look at facial expression dataset bias[J]. IEEE transactions on affective computing, 2022, 13(2): 881–893.
[8] XU Xiaolin, ZHENG Wenming, ZONG Yuan, et al. Sample self-revised network for cross-dataset facial expression recognition[C]//2022 International Joint Conference on Neural Networks. Padua: IEEE, 2022: 1-8.
[9] XU Xiaolin, ZONG Yuan, LU Cheng, et al. Enhanced sample self-revised network for cross-dataset facial expression recognition[J]. Entropy, 2022, 24(10): 1475.
[10] CHEN Tianshui, PU Tao, WU Hefeng, et al. Cross-domain facial expression recognition: a unified evaluation benchmark and adversarial graph learning[J]. IEEE transactions on pattern analysis and machine intelligence, 2022, 44(12): 9887–9903.
[11] GANIN Y, LEMPITSKY V. Unsupervised domain adaptation by backpropagation[C]//Proceedings of the 32nd International Conference on Machine Learning. Lille: ICML, 2015: 1180-1189.
[12] LONG Mingsheng, CAO Zhangjie, WANG Jianmin, et al. Conditional adversarial domain adaptation[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems. Montréal: ACM, 2018: 1647-1657.
[13] WANG Chao, DING Jundi, YAN Hui, et al. A prototype-oriented contrastive adaption network for cross-domain facial expression recognition[C]//Asian Conference on Computer Vision. Cham: Springer, 2023: 324-340.
[14] XIE Yuan, CHEN Tianshui, PU Tao, et al. Adversarial graph representation adaptation for cross-domain facial expression recognition[C]//Proceedings of the 28th ACM International Conference on Multimedia. Seattle: ACM, 2020: 1255-1264.
[15] TIAN Yingli, KANADE T, COHN J F. Recognizing action units for facial expression analysis[J]. IEEE transactions on pattern analysis and machine intelligence, 2001, 23(2): 97–115.
[16] ZHANG Kaipeng, ZHANG Zhanpeng, LI Zhifeng, et al. Joint face detection and alignment using multitask cascaded convolutional networks[J]. IEEE signal processing letters, 2016, 23(10): 1499–1503.
[17] HE Kaiming, ZHANG Xiangyu, REN Shaoqing, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[18] ARNAB A, DEHGHANI M, HEIGOLD G, et al. ViViT: a video vision transformer[C]//2021 IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2022: 6816-6826.
[19] KHAN S, NASEER M, HAYAT M, et al. Transformers in vision: a survey[J]. ACM computing surveys, 2022, 54(10s): 1–41.
[20] LUCEY P, COHN J F, KANADE T, et al. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression[C]//2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. San Francisco: IEEE, 2010: 94-101.
[21] LYONS M, AKAMATSU S, KAMACHI M, et al. Coding facial expressions with Gabor wavelets[C]//Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition. Nara: IEEE, 1998: 200-205.
[22] DHALL A, GOECKE R, LUCEY S, et al. Static facial expression analysis in tough conditions: data, evaluation protocol and benchmark[C]//2011 IEEE International Conference on Computer Vision Workshops. Barcelona: IEEE, 2012: 2106-2112.
[23] GOODFELLOW I J, ERHAN D, CARRIER P L, et al. Challenges in representation learning: a report on three machine learning contests[J]. Neural networks, 2015, 64: 59–63.
[24] ZHANG Zhanpeng, LUO Ping, LOY C C, et al. Learning social relation traits from face images[C]//2015 IEEE International Conference on Computer Vision. Santiago: IEEE, 2016: 3631-3639.
[25] LI Shan, DENG Weihong, DU Junping. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 2584-2593.
[26] LI Shan, DENG Weihong. Deep emotion transfer network for cross-database facial expression recognition[C]//2018 24th International Conference on Pattern Recognition. Beijing: IEEE, 2018: 3092-3099.
[27] VAN DER MAATEN L, HINTON G. Visualizing data using t-SNE[J]. Journal of machine learning research, 2008, 9: 2579–2605.
