[1] YAN He, LI Mengxue, ZHANG Yuning, et al. A ghost asymmetric residual attention network model for facial expression recognition[J]. CAAI Transactions on Intelligent Systems, 2023, 18(2): 333-340. [doi: 10.11992/tis.202201003]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume: 18
Issue: 2 (2023)
Pages: 333-340
Section: Academic Papers - Machine Perception and Pattern Recognition
Publication date: 2023-05-05
Title: A ghost asymmetric residual attention network model for facial expression recognition
Author(s): YAN He; LI Mengxue; ZHANG Yuning; LIU Jianqi
Affiliation: School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
Keywords: expression recognition; feature extraction; ResNet50; Ghost module; Mish; asymmetric residual attention; depthwise separable convolution; deep learning
CLC: TP391
DOI: 10.11992/tis.202201003
Abstract: This paper addresses the low accuracy in facial expression recognition caused by the 1×1 convolution dimensionality reduction in the Bottleneck of ResNet50. The Ghost module and depthwise separable convolution are introduced to replace the 1×1 and 3×3 convolutions in the Bottleneck, respectively, preserving more of the original feature information and improving the feature extraction ability of the trunk branch. The Mish activation function replaces the ReLU activation function in the Bottleneck, further improving recognition accuracy. To strengthen the model's ability to express important information, an asymmetric residual attention block (ARABlock) is introduced between the improved Bottlenecks. Comparative experiments show that the proposed ghost asymmetric residual attention network (GARAN) model achieves high recognition accuracy on the FER2013 and CK+ facial expression datasets.
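To make the Bottleneck modifications described in the abstract concrete, the following is a minimal PyTorch sketch of one modified block, assuming a standard GhostNet-style Ghost module with a ratio of 2 and a 3×3 depthwise kernel. The channel sizes, batch normalization placement, and shortcut projection are illustrative assumptions rather than the authors' exact design, and the ARABlock attention is not shown because the abstract does not describe its internals.

```python
import torch
import torch.nn as nn


class GhostModule(nn.Module):
    """Ghost module (GhostNet-style): a primary pointwise convolution produces
    part of the output channels; the remaining "ghost" maps are generated by a
    cheap depthwise convolution and concatenated with the primary output."""

    def __init__(self, in_ch, out_ch, ratio=2, dw_kernel=3):
        super().__init__()
        primary_ch = out_ch // ratio          # channels from the primary conv
        ghost_ch = out_ch - primary_ch        # channels from the cheap operation
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.Mish(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, ghost_ch, dw_kernel, padding=dw_kernel // 2,
                      groups=primary_ch, bias=False),   # depthwise conv
            nn.BatchNorm2d(ghost_ch),
            nn.Mish(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


class GhostBottleneck(nn.Module):
    """Sketch of a ResNet50 Bottleneck with the 1x1 convolutions replaced by
    Ghost modules, the 3x3 convolution replaced by a depthwise separable
    convolution, and ReLU replaced by Mish."""

    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.reduce = GhostModule(in_ch, mid_ch)        # replaces 1x1 reduction
        self.spatial = nn.Sequential(                   # depthwise separable 3x3
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1,
                      groups=mid_ch, bias=False),       # depthwise
            nn.BatchNorm2d(mid_ch),
            nn.Conv2d(mid_ch, mid_ch, 1, bias=False),   # pointwise
            nn.BatchNorm2d(mid_ch),
            nn.Mish(inplace=True),
        )
        self.expand = GhostModule(mid_ch, out_ch)       # replaces 1x1 expansion
        if stride == 1 and in_ch == out_ch:
            self.shortcut = nn.Identity()
        else:                                           # projection shortcut
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )
        self.act = nn.Mish(inplace=True)

    def forward(self, x):
        out = self.expand(self.spatial(self.reduce(x)))
        return self.act(out + self.shortcut(x))


if __name__ == "__main__":
    block = GhostBottleneck(256, 64, 256)
    x = torch.randn(1, 256, 56, 56)
    print(block(x).shape)  # torch.Size([1, 256, 56, 56])
```

In this sketch the Ghost module cuts the cost of the pointwise layers roughly in half (half the output channels come from the cheap depthwise branch), while the depthwise separable 3×3 stage keeps spatial filtering inexpensive; both choices follow the general Ghost/depthwise-separable recipe rather than any specific hyperparameters reported in the paper.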