[1]颜文靖,蒋柯,傅小兰.心理学视角下的自动表情识别[J].智能系统学报,2022,17(5):1039-1053.[doi:10.11992/tis.202112056]
 YAN Wenjing,JIANG Ke,FU Xiaolan.Automatic facial expression recognition from a psychological perspective[J].CAAI Transactions on Intelligent Systems,2022,17(5):1039-1053.[doi:10.11992/tis.202112056]

Automatic Facial Expression Recognition from a Psychological Perspective

参考文献/References:
[1] DARWIN C. The expression of the emotions in man and animals[M]. New York: Appleton and Company, 1872.
[2] EKMAN P. An argument for basic emotions[J]. Cognition and emotion, 1992, 6(3/4): 169–200.
[3] EKMAN P, CORDARO D. What is meant by calling emotions basic[J]. Emotion review, 2011, 3(4): 364–370.
[4] SARIYANIDI E, GUNES H, CAVALLARO A. Automatic analysis of facial affect: a survey of registration, representation, and recognition[J]. IEEE transactions on pattern analysis and machine intelligence, 2015, 37(6): 1113–1133.
[5] CRIVELLI C, FRIDLUND A J. Inside-out: from basic emotions theory to the behavioral ecology view[J]. Journal of nonverbal behavior, 2019, 43(2): 161–194.
[6] LI Shan, DENG Weihong. Deep facial expression recognition: a survey[J]. IEEE transactions on affective computing, 2020(99): 1.
[7] NOROOZI F, CORNEANU C A, KAMIŃSKA D, et al. Survey on emotional body gesture recognition[J]. IEEE transactions on affective computing, 2021, 12(2): 505–523.
[8] SAILUNAZ K, DHALIWAL M, ROKNE J, et al. Emotion detection from text and speech: a survey[J]. Social network analysis and mining, 2018, 8(1): 1–26.
[9] ZHAO Huijuan, YE Ning, WANG Ruchuan. A survey on automatic emotion recognition using audio big data and deep learning architectures[C]//2018 IEEE 4th International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing, (HPSC) and IEEE International Conference on Intelligent Data and Security. Omaha, IEEE, 2018: 139–142.
[10] SHU Lin, XIE Jinyan, YANG Mingyue, et al. A review of emotion recognition using physiological signals[J]. Sensors (Basel, Switzerland), 2018, 18(7): 2074.
[11] ALARCÃO S M, FONSECA M J. Emotions recognition using EEG signals: a survey[J]. IEEE transactions on affective computing, 2019, 10(3): 374–393.
[12] SCHACHTER S, SINGER J E. Cognitive, social, and physiological determinants of emotional state[J]. Psychological review, 1962, 69: 379–399.
[13] OSGOOD C E. Dimensionality of the semantic space for communication via facial expressions[J]. Scandinavian journal of psychology, 1966, 7(1): 1–30.
[14] MEHRABIAN A, RUSSELL J A. An approach to environmental psychology[M]. [S.l.]: The MIT Press, 1974.
[15] RUSSELL J A. A circumplex model of affect[J]. Journal of personality and social psychology, 1980, 39(6): 1161–1178.
[16] 李晓明, 傅小兰, 邓国峰. 中文简化版PAD情绪量表在京大学生中的初步试用[J]. 中国心理卫生杂志, 2008, 22(5): 327–329
LI Xiaoming, FU Xiaolan, DENG Guofeng. Preliminary application of the abbreviated PAD emotion scale to Chinese undergraduates[J]. Chinese mental health journal, 2008, 22(5): 327–329
[17] WATSON D, TELLEGEN A. Toward a consensual structure of mood[J]. Psychological bulletin, 1985, 98(2): 219–235.
[18] CORDARO D T, KELTNER D, TSHERING S, et al. The voice conveys emotion in ten globalized cultures and one remote village in Bhutan[J]. Emotion (Washington, D C), 2016, 16(1): 117–128.
[19] CORDARO D T, SUN Rui, KELTNER D, et al. Universals and cultural variations in 22 emotional expressions across five cultures[J]. Emotion (Washington, D C), 2018, 18(1): 75–93.
[20] 梁静, 颜文靖, 吴奇, 等. 微表情研究的进展与展望[J]. 中国科学基金, 2013, 27(2): 75–78, 82
LIANG Jing, YAN Wenjing, WU Qi, et al. Recent advances and future trends in microexpression research[J]. Bulletin of National Natural Science Foundation of China, 2013, 27(2): 75–78, 82
[21] EKMAN P, FRIESEN W V. Nonverbal leakage and clues to deception[J]. Psychiatry, 1969, 32(1): 88–106.
[22] YAN Wenjing, WU Qi, LIANG Jing, et al. How fast are the leaked facial expressions: the duration of micro-expressions[J]. Journal of nonverbal behavior, 2013, 37(4): 217–230.
[23] EKMAN P. Darwin’s contributions to our understanding of emotional expressions[J]. Philosophical transactions of the Royal Society of London Series B, Biological sciences, 2009, 364(1535): 3449–3451.
[24] HAGGARD E A, ISAACS K S. Micromomentary facial expressions as indicators of ego mechanisms in psychotherapy[M]//Methods of Research in Psychotherapy. Boston, MA: Springer, 1966: 154–165.
[25] RINN W E. The neuropsychology of facial expression: a review of the neurological and psychological mechanisms for producing facial expressions[J]. Psychological bulletin, 1984, 95(1): 52–77.
[26] EKMAN P. Lie catching and microexpressions[M]//The Philosophy of Deception. [S.l.]: Oxford University Press, 2009: 118–136.
[27] DU Shichuan, TAO Yong, MARTINEZ A M. Compound facial expressions of emotion[J]. PNAS, 2014, 111(15): E1454–E1462.
[28] LI Shan, DENG Weihong. Blended emotion in-the-wild: multi-label facial expression recognition using crowdsourced annotations and deep locality feature learning[J]. International journal of computer vision, 2019, 127(6/7): 884–906.
[29] KELTNER D, TRACY J, SAUTER D A, et al. Expression of emotion[J]. Handbook of emotions, 2016: 467–482.
[30] KELTNER D. Signs of appeasement: evidence for the distinct displays of embarrassment, amusement, and shame[J]. Journal of personality and social psychology, 1995, 68(3): 441–454.
[31] TRACY J L, ROBINS R W. Putting the self into self-conscious emotions: a theoretical model[J]. Psychological Inquiry, 2004, 15(2): 103–125.
[32] CAMPOS B, SHIOTA M N, KELTNER D, et al. What is shared, what is different? Core relational themes and expressive displays of eight positive emotions[J]. Cognition & emotion, 2013, 27(1): 37–52.
[33] KELTNER D, CORDARO D T. Understanding multimodal emotional expressions[EB/OL]//The science of facial expression. Oxford: Oxford University Press, 2017: 57–75. [2020–01–01]. http://emotionresearcher.com/wp-content/uploads/2015/08/Keltner-and-Cordaros-Original-Paper-With-Changed-Text-Highlighted.pdf
[34] MOLLAHOSSEINI A, HASANI B, MAHOOR M H. AffectNet: a database for facial expression, valence, and arousal computing in the wild[J]. IEEE transactions on affective computing, 2019, 10(1): 18–31.
[35] YAN Wenjing, LI Shan, QUE Chengtao, et al. RAF-AU database: in-the-wild facial expressions with subjective emotion judgement and objective AU annotations[C]//Asian Conference on Computer Vision. Cham: Springer, 2021: 68–82.
[36] LUCEY P, COHN J F, KANADE T, et al. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression[C]//2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops. San Francisco, IEEE, 2010: 94–101.
[37] LYONS M, AKAMATSU S, KAMACHI M, et al. Coding facial expressions with Gabor wavelets[C]//Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition. Nara, Japan. IEEE, 1998: 200–205.
[38] AIFANTI N, PAPACHRISTOU C, DELOPOULOS A. The MUG facial expression database[C]//11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS 10. Desenzano del Garda, Italy. IEEE, 2010: 1–4.
[39] LANGNER O, DOTSCH R, BIJLSTRA G, et al. Presentation and validation of the radboud faces database[J]. Cognition and emotion, 2010, 24(8): 1377–1388.
[40] MAVADATI S M, MAHOOR M H, BARTLETT K, et al. DISFA: a spontaneous facial action intensity database[J]. IEEE transactions on affective computing, 2013, 4(2): 151–160.
[41] SNEDDON I, MCRORIE M, MCKEOWN G, et al. The Belfast induced natural emotion database[J]. IEEE transactions on affective computing, 2012, 3(1): 32–41.
[42] VALSTAR M, PANTIC M. Induced disgust, happiness and surprise: an addition to the MMI facial expression database[C]//Proc. 3rd Intern. Workshop on EMOTION (satellite of LREC): Corpora for Research on Emotion and Affect. 2010: 65.
[43] GROSS R, MATTHEWS I, COHN J, et al. Multi-PIE[J]. Image and vision computing, 2010, 28(5): 807–813.
[44] BEN Xianye, REN Yi, ZHANG Junping, et al. Video-based facial micro-expression analysis: a survey of datasets, features and algorithms[J]. IEEE transactions on pattern analysis and machine intelligence, 2021.
[45] QU Fangbing, WANG Sujing, YAN Wenjing, et al. CAS(ME)²: a database for spontaneous macro-expression and micro-expression spotting and recognition[J]. IEEE transactions on affective computing, 2017, 9(4): 424–436.
[46] YAN Wenjing, WU Qi, LIU Yongjin, et al. CASME database: a dataset of spontaneous micro-expressions collected from neutralized faces[C]//2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition. Shanghai, IEEE, 2013: 1–7.
[47] YAN Wenjing, LI Xiaobai, WANG Sujing, et al. CASME II: an improved spontaneous micro-expression database and the baseline evaluation[J]. PLoS one, 2014, 9(1): e86041.
[48] MO Fan, ZHANG Zhihao, CHEN Tong, et al. MFED: a database for masked facial expression[J]. IEEE access, 2021, 9: 96279–96287.
[49] BENITEZ-QUIROZ C F, SRINIVASAN R, MARTINEZ A M. EmotioNet: an accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, IEEE, 2016: 5562–5570.
[50] LI Shan, DENG Weihong, DU Junping. Reliable crowdsourcing and deep locality-preserving learning for expression recognition in the wild[C]//2017 IEEE Conference on Computer Vision and Pattern Recognition. Honolulu, IEEE, 2017: 2584–2593.
[51] DHALL A, GOECKE R, GHOSH S, et al. From individual to group-level emotion recognition: EmotiW 5.0[C]//ICMI ’17: Proceedings of the 19th ACM International Conference on Multimodal Interaction. New York: ACM, 2017: 524–528.
[52] DHALL A, MURTHY O V R, GOECKE R, et al. Video and image based emotion recognition challenges in the wild: EmotiW 2015[C]//ICMI ’15: Proceedings of the 2015 ACM on International Conference on Multimodal Interaction. New York: ACM, 2015: 423–426.
[53] GUO Hui, ZHANG Xiaohui, LIANG Jun, et al. The dynamic features of lip corners in genuine and posed smiles[J]. Frontiers in psychology, 2018, 9: 202.
[54] EKMAN P, FRIESEN W V. Felt, false, and miserable smiles[J]. Journal of nonverbal behavior, 1982, 6(4): 238–252.
[55] HESS U, KLECK R E. Differentiating emotion elicited and deliberate emotional facial expressions[J]. European journal of social psychology, 1990, 20(5): 369–385.
[56] SCHMIDT K L, BHATTACHARYA S, DENLINGER R. Comparison of deliberate and spontaneous facial movement in smiles and eyebrow raises[J]. Journal of nonverbal behavior, 2009, 33(1): 35–45.
[57] O’REILLY H, PIGAT D, FRIDENSON S, et al. The EU-Emotion Stimulus Set: a validation study[J]. Behavior research methods, 2016, 48(2): 567–576.
[58] ZHALEHPOUR S, ONDER O, AKHTAR Z, et al. BAUM-1: a spontaneous audio-visual face database of affective and mental states[J]. IEEE transactions on affective computing, 2017, 8(3): 300–313.
[59] SOLEYMANI M, GARCIA D, JOU B, et al. A survey of multimodal sentiment analysis[J]. Image and vision computing, 2017, 65: 3–14.
[60] REISENZEIN R, STUDTMANN M, HORSTMANN G. Coherence between emotion and facial expression: evidence from laboratory experiments[J]. Emotion review, 2013, 5(1): 16–23.
[61] ROSENBERG E L, EKMAN P. Coherence between expressive and experiential systems in emotion[J]. Cognition and emotion, 1994, 8(3): 201–229.
[62] QU Fangbing, YAN Wenjing, CHEN Yunsin, et al. “You Should Have Seen the Look on Your Face…”: Self-awareness of Facial Expressions[J]. Frontiers in psychology, 2017, 8: 832.
[63] EKMAN P, FRIESEN W, HAGER J. FACS Investigator’s Guide (The Manual on CD Rom)[M]. Salt Lake: Network Information Research Corporation, 2002.
[64] BENITEZ-QUIROZ C F, SRINIVASAN R, MARTINEZ A M. Discriminant functional learning of color features for the recognition of facial action units and their intensities[J]. IEEE transactions on pattern analysis and machine intelligence, 2019, 41(12): 2835–2845.
[65] CHU Wensheng, DE LA TORRE F, COHN J F. Learning spatial and temporal cues for multi-label facial action unit detection[C]//2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017). Washington, IEEE Computer Society, 2017: 25–32.
[66] WANG Shangfei, PENG Guozhu, CHEN Shiyu, et al. Weakly supervised facial action unit recognition with domain knowledge[J]. IEEE transactions on cybernetics, 2018, 48(11): 3265–3276.
[67] WANG Pengcheng, WANG Zihao, JI Zhilong, et al. TAL EmotioNet challenge 2020 rethinking the model chosen problem in multi-task learning[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Seattle, IEEE, 2020: 1653–1656.
[68] KOSSAIFI J, TZIMIROPOULOS G, TODOROVIC S, et al. AFEW-VA database for valence and arousal estimation in-the-wild[J]. Image and vision computing, 2017, 65: 23–36.
[69] CARROLL J M, RUSSELL J A. Do facial expressions signal specific emotions? Judging emotion from the face in context[J]. Journal of personality and social psychology, 1996, 70(2): 205–218.
[70] DURÁN J I, REISENZEIN R, FERNÁNDEZ-DOLS J M. Coherence between emotions and facial expressions[M]. [S.l.]: Oxford University Press, 2017.
[71] DURÁN J I, FERNÁNDEZ-DOLS J M. Do emotions result in their predicted facial expressions? A meta-analysis of studies on the co-occurrence of expression and emotion[J]. Emotion (Washington, D C), 2021, 21(7): 1550–1569.
[72] BARRETT L F, ADOLPHS R, MARSELLA S, et al. Emotional expressions reconsidered: challenges to inferring emotion from human facial movements[J]. Psychological science in the public interest: a journal of the American Psychological Society, 2019, 20(1): 1–68.
[73] BARRETT L F. Psychological construction: the Darwinian approach to the science of emotion[J]. Emotion review, 2013, 5(4): 379–389.
[74] ZHANG Xing, YIN Lijun, COHN J F, et al. BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database[J]. Image and vision computing, 2014, 32(10): 692–706.
[75] AVIEZER H, TROPE Y, TODOROV A. Body cues, not facial expressions, discriminate between intense positive and negative emotions[J]. Science, 2012, 338(6111): 1225–1229.
[76] BARRETT L F, MESQUITA B, GENDRON M. Context in emotion perception[J]. Current directions in psychological science, 2011, 20(5): 286–290.
[77] AVIEZER H, HASSIN R, BENTIN S, et al. Putting facial expressions back in context[EB/OL]. 2008. [2020–01–01]. http://cel.huji.ac.il/publications/pdfs/Aviezer_et_al_2008_Chapter_in_First_Impressions.pdf
[78] RUSSELL J A. Core affect and the psychological construction of emotion[J]. Psychological review, 2003, 110(1): 145–172.
[79] VALSTAR M, SCHULLER B, SMITH K, et al. AVEC 2014: 3D dimensional affect and depression recognition challenge[C]//AVEC ’14: Proceedings of the 4th International Workshop on Audio/Visual Emotion Challenge. New York: ACM, 2014: 3–10.
[80] FIQUER J T, MORENO R A, BRUNONI A R, et al. What is the nonverbal communication of depression? Assessing expressive differences between depressive patients and healthy volunteers during clinical interviews[J]. Journal of affective disorders, 2018, 238: 636–644.
[81] RUSSELL J A, FERNÁNDEZ-DOLS J M. The science of facial expression[M]. New York: Oxford University Press, 2017.
[82] CRIVELLI C, FRIDLUND A J. Facial displays are tools for social influence[J]. Trends in cognitive sciences, 2018, 22(5): 388–399.
[83] CAMPBELL R L. Constructive processes: abstraction, generalization, and dialectics[M]//The Cambridge Companion to Piaget. Cambridge: Cambridge University Press, 2009: 150–170.
[84] CLARK A. Whatever next? Predictive brains, situated agents, and the future of cognitive science[J]. The Behavioral and brain sciences, 2013, 36(3): 181–204.
[85] CLARK A. Predicting peace: The end of the representation wars[M]. Open MIND. Frankfurt am Main: MIND Group, 2015.
[86] KWISTHOUT J, BEKKERING H, VAN ROOIJ I. To be precise, the details don’t matter: on predictive processing, precision, and level of detail of predictions[J]. Brain and cognition, 2017, 112: 84–91.

备注/Memo

Received: 2021-12-30.
Funding: Wenzhou Science and Technology Plan Project (G20210027).
About the authors: YAN Wenjing, associate professor. Research interests: emotion and facial expression, deception detection, and mental health. Principal investigator of one National Natural Science Foundation of China project and one Zhejiang Provincial Natural Science Foundation project; recipient of the first prize of the 8th Wu Wenjun AI Science and Technology Award (Natural Science) in 2018; first-authored papers have been cited more than 1000 times on Google Scholar. JIANG Ke, professor, member of the Theoretical Psychology and History of Psychology Division of the Chinese Psychological Society and council member of the Zhejiang Society of Social Psychology. Research interests: cognitive and behavioral characteristics of reasoning and decision making, theoretical foundations and cognitive logic of artificial intelligence, philosophy of mind, and evolutionary psychology. Principal investigator of projects funded by the National Social Science Fund of China, the Ministry of Education, and the Wenzhou Philosophy and Social Science Program; author of more than 50 academic papers and 7 books, including monographs, translations, and textbooks. FU Xiaolan, research professor, standing council member and former president and secretary-general of the Chinese Psychological Society, member of the psychology panel of the Academic Degrees Committee of the State Council, and editor-in-chief of Acta Psychologica Sinica. Research interests: cognitive psychology, the psychology of emotion, and the psychology of deception. Recipient of second prizes of the Beijing Science and Technology Award in 2013 and 2017, the first prize of the Wu Wenjun AI Science and Technology Award (Natural Science) in 2018, and the first prize (Technological Invention) of the Chinese Institute of Electronics Science and Technology Award in 2021. Has undertaken or participated in more than 30 science and technology projects, published more than 380 academic papers, and edited 10 books and 13 translations.
Corresponding author: FU Xiaolan. E-mail: fuxl@psych.ac.cn

Copyright © Editorial Office of CAAI Transactions on Intelligent Systems
Address: Building 145-1, Nantong Street, Nangang District, Harbin, Heilongjiang 150001, China. Tel: 0451-82534001, 82518134. E-mail: tis@vip.sina.com