PAN Jiahui, HE Zhipeng, LI Zina, et al. A review of multimodal emotion recognition[J]. CAAI Transactions on Intelligent Systems, 2020, 15(4): 633-645. [doi: 10.11992/tis.202001032]

A review of multimodal emotion recognition

CAAI Transactions on Intelligent Systems [ISSN: 1673-4785 / CN: 23-1538/TP]

Volume:
15
Issue:
2020, No. 4
Pages:
633-645
Column:
Review
Publication date:
2020-07-05

Article Info

Title:
A review of multimodal emotion recognition
Author(s):
PAN Jiahui1, HE Zhipeng1, LI Zina2, LIANG Yan1, QIU Lina1
1. School of Software, South China Normal University, Foshan 528225, China;
2. School of Computer, South China Normal University, Guangzhou 510641, China
Keywords:
emotion recognition; emotion description model; emotion inducing mode; information fusion; fusion strategy; emotion representation; modality blend
CLC number:
TP391.4
DOI:
10.11992/tis.202001032
Abstract:
This paper reviews the emerging field of multimodal emotion recognition. First, the research foundations of emotion recognition are summarized from two aspects: emotion description models and emotion-inducing modes. Then, addressing information fusion, the central and most difficult problem in multimodal emotion recognition, mainstream and efficient fusion strategies are introduced at four fusion levels: data-level, feature-level, decision-level, and model-level fusion. Next, representative multimodal combinations are examined from three perspectives: combining multiple external presentation modalities, combining multiple neurophysiological modalities, and combining neurophysiological with external presentation modalities. Together, these examples demonstrate that multimodal approaches discriminate and represent emotions better than unimodal ones. Some thoughts on turning multimodal emotion recognition methods into engineering applications are also put forward. Finally, based on an analysis of the current state of emotion recognition research, ways and strategies to improve the performance of emotion recognition models are discussed, and future prospects are outlined.
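To make the four fusion levels concrete, the sketch below contrasts two of them: feature-level fusion (concatenating per-modality feature vectors and training a single classifier) and decision-level fusion (training one classifier per modality and averaging their predicted class probabilities). This is a minimal illustration under assumed inputs, not code from the paper: the synthetic "eeg" and "face" feature matrices, the logistic-regression classifiers, and all variable names are placeholders chosen for brevity.

    # Minimal sketch (illustrative only): feature-level vs. decision-level fusion.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 200
    y = rng.integers(0, 2, size=n)                # toy binary emotion labels
    eeg = rng.normal(size=(n, 32)) + y[:, None]   # synthetic "EEG" features
    face = rng.normal(size=(n, 10)) + y[:, None]  # synthetic "facial" features

    # Feature-level fusion: concatenate modality features, train one classifier.
    fused = np.hstack([eeg, face])
    early_clf = LogisticRegression(max_iter=1000).fit(fused, y)

    # Decision-level fusion: one classifier per modality, average the
    # predicted probabilities, then take the most probable class.
    eeg_clf = LogisticRegression(max_iter=1000).fit(eeg, y)
    face_clf = LogisticRegression(max_iter=1000).fit(face, y)
    avg_proba = (eeg_clf.predict_proba(eeg) + face_clf.predict_proba(face)) / 2
    late_pred = avg_proba.argmax(axis=1)

    print("feature-level (training) accuracy:", early_clf.score(fused, y))
    print("decision-level (training) accuracy:", (late_pred == y).mean())

Data-level fusion would instead merge the raw signals before any feature extraction, and model-level fusion would let the modality streams interact inside a single model (for example, through a shared hidden layer); both are omitted here for brevity.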

References:

[1] PICARD R W, HEALEY J. Affective wearables[J]. Personal technologies, 1997, 1(4): 231-240.
[2] PAN J, XIE Q, HUANG H, et al. Emotion-related consciousness detection in patients with disorders of consciousness through an EEG-based BCI system[J]. Frontiers in human neuroscience, 2018, 12: 198-209.
[3] HUANG H, XIE Q, PAN J, et al. An EEG-based brain computer interface for emotion recognition and its application in patients with disorder of consciousness[J]. IEEE transactions on affective computing, 2019, 99: 1-10.
[4] WANG S, PHILLIPS P, DONG Z, et al. Intelligent facial emotion recognition based on stationary wavelet entropy and Jaya algorithm[J]. Neurocomputing, 2018, 272: 668-676.
[5] WANG W, WU J. Notice of retraction: emotion recognition based on CSO&SVM in e-learning[C]//Proceedings of the 2011 Seventh International Conference on Natural Computation. Shanghai, China, 2011: 566-570.
[6] LIU W, QIAN J, YAO Z, et al. Convolutional two-stream network using multi-facial feature fusion for driver fatigue detection[J]. Future internet, 2019, 11(5): 115.
[7] BORIL H, OMID SADJADI S, KLEINSCHMIDT T, et al. Analysis and detection of cognitive load and frustration in drivers’ speech[C]//Proceedings of INTERSPEECH. 2010: 502-505.
[8] LU Yifei. Research on multimodal emotion recognition based on the fusion of EEG and eye movement signals[D]. Shanghai: Shanghai Jiao Tong University, 2017. (in Chinese)
[9] LIU Z, WU M, TAN G, et al. Speech emotion recognition based on feature selection and extreme learning machine decision tree[J]. Neurocomputing, 2018, 273: 271-280.
[10] LIU Z, WU M, CAO W, et al. A facial expression emotion recognition based human-robot interaction system[J]. IEEE/CAA journal of automatica sinica, 2017, 4(4): 668-676.
[11] LIU Z, PAN F, WU M, et al. A multimodal emotional communication based humans-robots interaction system[C]//Proceedings of the 35th Chinese Control Conference. Chengdu, China, 2016: 6363-6368.
[12] CHEN S, JIN Q. Multi-modal conditional attention fusion for dimensional emotion prediction[C]//Proceedings of the 24th ACM International Conference on Multimedia. Amsterdam, the Netherlands, 2016: 571-575.
[13] CHEN S, LI X, JIN Q, et al. Video emotion recognition in the wild based on fusion of multimodal features[C]//Proceedings of the 18th ACM International Conference on Multimodal Interaction. Tokyo, Japan, 2016: 494-500.
[14] ZHANG X, SHEN J, DIN Z U, et al. Multimodal depression detection: fusion of electroencephalography and paralinguistic behaviors using a novel strategy for classifier ensemble[J]. IEEE journal of biomedical and health informatics, 2019, 23(6): 2265-2275.
[15] ZONG Y, ZHENG W, HUANG X, et al. Emotion recognition in the wild via sparse transductive transfer linear discriminant analysis[J]. Journal on multimodal user interfaces, 2016, 10(2): 163-172.
[16] ZHANG T, ZHENG W, CUI Z, et al. Spatial-temporal recurrent neural network for emotion recognition[J]. IEEE transactions on cybernetics, 2019, 49(3): 839-847.
[17] ZHENG W, LIU W, LU Y, et al. EmotionMeter: a multimodal framework for recognizing human emotions[J]. IEEE transactions on cybernetics, 2018, 49(3): 1110-1122.
[18] ZHENG W, ZHU J, LU B. Identifying stable patterns over time for emotion recognition from EEG[J]. IEEE transactions on affective computing, 2019, 10(3): 417-429.
[19] YAN X, ZHENG W, LIU W, et al. Investigating gender differences of brain areas in emotion recognition using LSTM neural network[C]//Proceedings of the International Conference on Neural Information Processing. Guangzhou, China, 2017: 820-829.
[20] LI J, QIU S, SHEN Y, et al. Multisource transfer learning for cross-subject EEG emotion recognition[J]. IEEE transactions on cybernetics, 2019, 50(7): 1-13.
[21] DU Changde, DU Changying, LI J, et al. Semi-supervised Bayesian deep multi-modal emotion recognition[J]. arXiv preprint arXiv:1704.07548, 2017.
[22] CHENG Jing. Research on nonlinear feature extraction of basic emotional physiological signals[D]. Chongqing: Southwest University, 2015. (in Chinese)
[23] WEN Wanhui. Research on emotion recognition methods based on physiological signals[D]. Chongqing: Southwest University, 2010. (in Chinese)
[24] PICARD R W. Affective computing: challenges[J]. International journal of human-computer studies, 2003, 59(1-2): 55-64.
[25] EKMAN P E, DAVIDSON R J. The nature of emotion: fundamental questions[M]. Oxford: Oxford university press, 1994.
[26] GAO Qingji, ZHAO Zhihua, XU Da, et al. Review on speech emotion recognition research[J]. CAAI transactions on intelligent systems, 2020, 15(1): 1-13. (in Chinese)
[27] JOHNSTON V S. Why we feel: The science of human emotions[M]. New York: Perseus publishing, 1999.
[28] RUSSELL J A. A circumplex model of affect[J]. Journal of personality and social psychology, 1980, 39(6): 1161-1178.
[29] MEHRABIAN A. Basic dimensions for a general psychological theory: implications for personality, social, environmental, and developmental studies[M]. Cambridge, MA: Oelgeschlager, Gunn & Hain, 1980.
[30] ORTONY A, CLORE G L, COLLINS A. The cognitive structure of emotions[M]. Cambridge: Cambridge University Press, 1988.
[31] PICARD R W. Affective computing[M]. Cambridge: MIT press, 2000.
[32] VAN KESTEREN A, OP DEN AKKER R, POEL M, et al. Simulation of emotions of agents in virtual environments using neural networks[J]. Learning to behave: internalising knowledge, 2000: 137-147.
[33] PLUTCHIK R. Emotions and life: Perspectives from psychology, biology, and evolution[M]. Washington: American Psychological Association, 2003.
[34] IZARD C E. Human emotions[M]. Berlin: Springer Science & Business Media, 2013.
[35] ZHUANG N, ZENG Y, YANG K, et al. Investigating patterns for self-induced emotion recognition from EEG signals[J]. Sensors, 2018, 18(3): 841.
[36] IACOVIELLO D, PETRACCA A, SPEZIALETTI M, et al. A real-time classification algorithm for EEG-based BCI driven by self-induced emotions[J]. Computer methods and programs in biomedicine, 2015, 122(3): 293-303.
[37] RIZZOLATTI G, CRAIGHERO L. The mirror-neuron system[J]. Annual review of neuroscience, 2004, 27: 169-192.
[38] LANG P J, BRADLEY M M, CUTHBERT B N. International affective picture system (IAPS): Technical manual and affective ratings[J]. NIMH center for the study of emotion and attention, 1997, 1: 39-58.
[39] BRADLEY M, LANG P J. The International affective digitized sounds (IADS)[M]. Rockville: NIMH center, 1999.
[40] KOELSTRA S, MUHL C, SOLEYMANI M, et al. DEAP: a database for emotion analysis using physiological signals[J]. IEEE transactions on affective computing, 2011, 3(1): 18-31.
[41] SOLEYMANI M, LICHTENAUER J, PUN T, et al. A multimodal database for affect recognition and implicit tagging[J]. IEEE transactions on affective computing, 2012, 3(1): 42-55.
[42] MARTIN O, KOTSIA I, MACQ B. The eNTERFACE’05 audio-visual emotion database[C]//Proceedings of the International Conference on Data Engineering Workshops. Atlanta, USA, 2006: 8.
[43] HE Jun, LIU Yue, HE Zhongwen. Research progress of multimodal emotion recognition[J]. Application research of computers, 2018, 35(11): 3201-3205. (in Chinese)
[44] D’MELLO S K, KORY J. A review and meta-analysis of multimodal affect detection systems[J]. ACM computing surveys (CSUR), 2015, 47(3): 43.
[45] PORIA S, CAMBRIA E, BAJPAI R, et al. A review of affective computing: from unimodal analysis to multimodal fusion[J]. Information fusion, 2017, 37: 98-125.
[46] HUANG Yongrui, YANG Jianhao, LIAO Pengkai, et al. Emotion recognition technology combining face images and EEG[J]. Computer systems & applications, 2018, 27(2): 9-15. (in Chinese)
[47] SUN Haoying, JIANG Jingping. Multi-sensor data fusion based on parameter estimation[J]. Sensor technology, 1995(6): 32-36. (in Chinese)
[48] MINOTTO V P, JUNG C R, LEE B. Multimodal multi-channel on-line speaker diarization using sensor fusion through SVM[J]. IEEE transactions on multimedia, 2015, 17(10): 1694-1705.
[49] ZHANG Baomei. Research on data fusion methods at the data level and feature level[D]. Lanzhou: Lanzhou University of Technology, 2005. (in Chinese)
[50] PORIA S, CHATURVEDI I, CAMBRIA E, et al. Convolutional MKL based multimodal emotion recognition and sentiment analysis[C]//Proceedings of the 2016 IEEE 16th International Conference on Data Mining (ICDM). Barcelona, Spain, 2016: 439-448.
[51] HAGHIGHAT M, ABDEL-MOTTALEB M, ALHALABI W. Discriminant correlation analysis: real-time feature level fusion for multimodal biometric recognition[J]. IEEE transactions on information forensics and security, 2016, 11(9): 1984-1996.
[52] EMERICH S, LUPU E, APATEAN A. Bimodal approach in emotion recognition using speech and facial expressions[C]//Proceedings of the 2009 International Symposium on Signals, Circuits and Systems. Iasi, Romania, 2009: 1-4.
[53] YAN J, ZHENG W, XU Q, et al. Sparse kernel reduced-rank regression for bimodal emotion recognition from facial expression and speech[J]. IEEE transactions on multimedia, 2016, 18(7): 1319-1329.
[54] MANSOORIZADEH M, CHARKARI N M. Multimodal information fusion application to human emotion recognition from face and speech[J]. Multimedia tools and applications, 2010, 49(2): 277-297.
[55] ZHALEHPOUR S, ONDER O, AKHTAR Z, et al. BAUM-1: A spontaneous audio-visual face database of affective and mental states[J]. IEEE transactions on affective computing, 2016, 8(3): 300-313.
[56] WU P, LIU H, LI X, et al. A novel lip descriptor for audio-visual keyword spotting based on adaptive decision fusion[J]. IEEE transactions on multimedia, 2016, 18(3): 326-338.
[57] GUNES H, PICCARDI M. Bi-modal emotion recognition from expressive face and body gestures[J]. Journal of network and computer applications, 2007, 30(4): 1334-1345.
[58] KOELSTRA S, PATRAS I. Fusion of facial expressions and EEG for implicit affective tagging[J]. Image and vision computing, 2013, 31(2): 164-174.
[59] SOLEYMANI M, ASGHARIESFEDEN S, PANTIC M, et al. Continuous emotion detection using EEG signals and facial expressions[C]//Proceedings of the 2014 IEEE International Conference on Multimedia and Expo. Chengdu, China, 2014: 1-6.
[60] PONTI JR M P. Combining classifiers: from the creation of ensembles to the decision fusion[C]//Proceedings of the 2011 24th SIBGRAPI Conference on Graphics, Patterns, and Images Tutorials. Alagoas, Brazil, 2011: 1-10.
[61] FREUND Y, SCHAPIRE R E. Experiments with a new boosting algorithm[C]//Proceedings of the 1996 International Conference on Machine Learning. Bari, Italy, 1996: 148-156.
[62] CHANG Z, LIAO X, LIU Y, et al. Research of decision fusion for multi-source remote-sensing satellite information based on SVMs and DS evidence theory[C]//Proceedings of the Fourth International Workshop on Advanced Computational Intelligence. Wuhan, China, 2011: 416-420.
[63] NEFIAN A V, LIANG L, PI X, et al. Dynamic bayesian networks for audio-visual speech recognition[J]. EURASIP journal on advances in signal processing, 2002, 2002(11): 783042.
[64] MUROFUSHI T, SUGENO M. An interpretation of fuzzy measures and the Choquet integral as an integral with respect to a fuzzy measure[J]. Fuzzy sets and systems, 1989, 29(2): 201-227.
[65] HUANG Y, YANG J, LIU S, et al. Combining facial expressions and electroencephalography to enhance emotion recognition[J]. Future internet, 2019, 11(5): 105.
[66] LU Y, ZHENG W, LI B, et al. Combining eye movements and EEG to enhance emotion recognition[C]//Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence. Buenos Aires, Argentina, 2015: 1170-1176.
[67] ZHANG S, ZHANG S, HUANG T, et al. Learning affective features with a hybrid deep model for audio-visual emotion recognition[J]. IEEE transactions on circuits and systems for video technology, 2017, 28(10): 3030-3043.
[68] METALLINOU A, WOLLMER M, KATSAMANIS A, et al. Context-sensitive learning for enhanced audiovisual emotion classification[J]. IEEE transactions on affective computing, 2012, 3(2): 184-198.
[69] MCGURK H, MACDONALD J. Hearing lips and seeing voices[J]. Nature, 1976, 264(5588): 746-748.
[70] NGUYEN D, NGUYEN K, SRIDHARAN S, et al. Deep spatio-temporal feature fusion with compact bilinear pooling for multimodal emotion recognition[J]. Computer vision and image understanding, 2018, 174: 33-42.
[71] DOBRIŠEK S, GAJŠEK R, MIHELIČ F, et al. Towards efficient multi-modal emotion recognition[J]. International journal of advanced robotic systems, 2013, 10(1): 53.
[72] ZHANG S, ZHANG S, HUANG T, et al. Learning affective features with a hybrid deep model for audio-visual emotion recognition[J]. IEEE transactions on circuits and systems for video technology, 2017, 28(10): 3030-3043.
[73] TANG H, LIU W, ZHENG W, et al. Multimodal emotion recognition using deep neural networks[C]//Proceedings of the International Conference on Neural Information Processing. Guangzhou, China, 2017: 811-819.
[74] YIN Z, ZHAO M, WANG Y, et al. Recognition of emotions using multimodal physiological signals and an ensemble deep learning model[J]. Computer methods and programs in biomedicine, 2017, 140: 93-110.
[75] SOLEYMANI M, PANTIC M, PUN T. Multimodal emotion recognition in response to videos[J]. IEEE transactions on affective computing, 2011, 3(2): 211-223.
[76] JAMES W. What is an Emotion?[J]. Mind, 1884, 9(34): 188-205.
[77] COWIE R, DOUGLAS-COWIE E. Automatic statistical analysis of the signal and prosodic signs of emotion in speech[C]//Proceedings of the Fourth International Conference on Spoken Language Processing ICSLP’96. Philadelphia, USA, 1996: 1989-1992.
[78] SCHERER K R. Adding the affective dimension: a new look in speech analysis and synthesis[C]//Proceedings of the ICSLP. Philadelphia, USA, 1996.
[79] PANTIC M, ROTHKRANTZ L J. Automatic analysis of facial expressions: the state of the art[J]. IEEE transactions on pattern analysis & machine intelligence, 2000, 22(12): 1424-1445.
[80] IOANNOU S V, RAOUZAIOU A T, TZOUVARAS V A, et al. Emotion recognition through facial expression analysis based on a neurofuzzy network[J]. Neural networks, 2005, 18(4): 423-435.
[81] CASTELLANO G, VILLALBA S D, CAMURRI A. Recognising human emotions from body movement and gesture dynamics[C]//Proceedings of the International Conference on Affective Computing and Intelligent Interaction. Lisbon, Portugal, 2007: 71-82.
[82] CAMURRI A, LAGERLÖF I, VOLPE G. Recognizing emotion from dance movement: comparison of spectator recognition and automated techniques[J]. International journal of human-computer studies, 2003, 59(1-2): 213-225.
[83] EL KALIOUBY R, ROBINSON P. Generalization of a vision-based computational model of mind-reading[C]//Proceedings of the International Conference on Affective Computing and Intelligent Interaction. Beijing, China, 2005: 582-589.
[84] CASTELLANO G, KESSOUS L, CARIDAKIS G. Emotion recognition through multiple modalities: face, body gesture, speech[M]. Berlin: Springer-Verlag, 2008: 92-103.
[85] SCHERER K R, ELLGRING H. Multimodal expression of emotion: affect programs or componential appraisal patterns?[J]. Emotion, 2007, 7(1): 158-171.
[86] PETRANTONAKIS P C, HADJILEONTIADIS L J. A novel emotion elicitation index using frontal brain asymmetry for enhanced EEG-based emotion recognition[J]. IEEE transactions on information technology in biomedicine, 2011, 15(5): 737-746.
[87] LIN Y, WANG C, JUNG T, et al. EEG-based emotion recognition in music listening[J]. IEEE transactions on biomedical engineering, 2010, 57(7): 1798-1806.
[88] DAVIDSON R J, FOX N A. Asymmetrical brain activity discriminates between positive and negative affective stimuli in human infants[J]. Science, 1982, 218(4578): 1235-1237.
[89] TURETSKY B I, KOHLER C G, INDERSMITTEN T, et al. Facial emotion recognition in schizophrenia: when and why does it go awry?[J]. Schizophrenia research, 2007, 94(1-3): 253-263.
[90] HAJCAK G, MACNAMARA A, OLVET D M. Event-related potentials, emotion, and emotion regulation: an integrative review[J]. Developmental neuropsychology, 2010, 35(2): 129-155.
[91] ALARCAO S M, FONSECA M J. Emotions recognition using EEG signals: A survey[J]. IEEE transactions on affective computing, 2019, 10(3): 374-393.
[92] CHANEL G, KIERKELS J J M, SOLEYMANI M, et al. Short-term emotion assessment in a recall paradigm[J]. International journal of human-computer studies, 2009, 67(8): 607-627.
[93] CHEN S, ZHEN G, WANG S. Emotion recognition from peripheral physiological signals enhanced by EEG[C]//Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Shanghai, China, 2016.
[94] EKMAN P. An argument for basic emotions[J]. Cognition & emotion, 1992, 6(3-4): 169-200.
[95] HUANG Y, YANG J, LIAO P, et al. Fusion of facial expressions and EEG for multimodal emotion recognition[J]. Computational intelligence and neuroscience, 2017: 2107451.
[96] KAPOOR A, BURLESON W, PICARD R W. Automatic prediction of frustration[J]. International journal of human-computer studies, 2007, 65(8): 724-736.
[97] LIU W, ZHENG W, LU B. Emotion recognition using multimodal deep learning[M]. Berlin: Springer International Publishing, 2016: 521-529.
[98] VON LÜHMANN A, WABNITZ H, SANDER T, et al. M3BA: a mobile, modular, multimodal biosignal acquisition architecture for miniaturized EEG-NIRS based hybrid BCI and monitoring[J]. IEEE transactions on biomedical engineering, 2016, 64(6): 1199-1210.
[99] AREVALILLO-HERRÁEZ M, COBOS M, ROGER S, et al. Combining inter-subject modeling with a subject-based data transformation to improve affect recognition from EEG signals[J]. Sensors, 2019, 19(13): 2999.
[100] GUO Chen, GAO Xiaorong. A data synchronization method for eye movement detection and EEG acquisition[C]//Proceedings of the 9th National Conference on Information Acquisition and Processing (Part II). Dandong, China, 2011. (in Chinese)
[101] ZHAO Liang. Research on multimodal data fusion algorithms[D]. Dalian: Dalian University of Technology, 2018. (in Chinese)
[102] ZHENG W, ZHU J, PENG Y, et al. EEG-based emotion classification using deep belief networks[C]//Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME). Chengdu, China, 2014: 1-6.
[103] MOWER E, MATARIC M J, NARAYANAN S. A framework for automatic human emotion classification using emotion profiles[J]. IEEE transactions on audio, speech, and language processing, 2010, 19(5): 1057-1070.

Memo

Received: 2020-01-30.
Foundation items: General Program of the National Natural Science Foundation of China (61876067); General Program of the Natural Science Foundation of Guangdong Province (2019A1515011375); Key-Area Research and Development Project of the Guangzhou Science and Technology Plan (202007030005).
About the authors: PAN Jiahui, associate professor, Ph.D., standing committee member of the Digital Medicine Branch of the Guangdong Medical Association. His main research interests are machine learning, brain-computer interfaces, pattern recognition, and intelligent systems. A Guangzhou Pearl River Science and Technology Nova and a distinguished teacher of South China Normal University, he has twice won the first prize of the Guangdong Science and Technology Award, as well as the third prize of the Chinese Medical Science and Technology Award. He has led two National Natural Science Foundation of China projects, two Guangdong Natural Science Foundation projects, one Guangzhou key-area research and development project, and one Guangzhou science and technology innovation talent project, and has published more than 80 academic papers. HE Zhipeng, master's student; his main research interests are affective computing and hybrid brain-computer interfaces. LI Zina, master's student; her main research interests are machine learning and emotion recognition.
Corresponding author: PAN Jiahui. E-mail: panjh82@qq.com
Last update: 2020-07-25