[1] TIAN Qing, MAO Junxiang, CAO Meng. Research on the coupled-relationships self-learning human facial age estimation[J]. CAAI Transactions on Intelligent Systems, 2022, 17(2): 257-265. [doi:10.11992/tis.202101020]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume: 17
Issue: 2022, No. 2
Pages: 257-265
Column: Academic Papers: Machine Learning
Publication date: 2022-03-05
- Title:
Research on the coupled-relationships self-learning human facial age estimation
- Author(s):
TIAN Qing1,2, MAO Junxiang1,3, CAO Meng1
- Affiliation(s):
1. School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing 210044, China;
2. Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science & Technology, Nanjing 210044, China;
3. School of Computer Science and Engineering, Southeast University, Nanjing 210096, China
- Keywords:
human facial age estimation; coupled relationship; feature relationship; coding relationship; input-output relationship; relationships self-learning; alternating optimization; deep architecture
- CLC:
TP391
- DOI:
10.11992/tis.202101020
- Abstract:
Although a variety of works have been proposed to exploit potential relationships for human facial age estimation (AE), most of them are limited to one-sided relationships and rarely consider multi-sided coupled relationships. Therefore, we propose a coupled-relationships self-learning age estimation model, CRSAE, which jointly exploits three kinds of potential relationships, i.e., input feature relationships, output coding relationships, and input-output relationships, to improve the generalization of AE models. Specifically, the row and column covariance matrices of the projection matrix are modeled to construct regularizers for the feature and coding relationships, respectively, while the input-output relationships are exploited through a structure matrix. To solve the proposed CRSAE model effectively, we present an alternating optimization algorithm. In view of the highly nonlinear characteristics of facial features, we further extend the model with a deep architecture to enhance its generalization. Finally, evaluation experiments on multiple human facial datasets demonstrate the effectiveness and superiority of the proposed methods.
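To make the coupled regularization concrete, one plausible instantiation (a minimal sketch in our own notation; the paper's exact objective and its structure-matrix coupling of inputs and outputs are not reproduced here) is a projection-based regression penalized through the row and column covariances of the projection matrix:

\[
\min_{W,\,\Omega,\,\Sigma}\ \|XW - Y\|_F^2 + \lambda\,\mathrm{tr}\!\big(\Omega^{-1} W \Sigma^{-1} W^{\top}\big)
\quad \text{s.t.}\ \Omega \succeq 0,\ \mathrm{tr}(\Omega)=1,\ \Sigma \succeq 0,\ \mathrm{tr}(\Sigma)=1,
\]

where \(X \in \mathbb{R}^{n\times d}\) stacks the facial features, \(Y \in \mathbb{R}^{n\times c}\) the age codings, \(W \in \mathbb{R}^{d\times c}\) is the projection matrix, and \(\Omega\) (\(d\times d\)) and \(\Sigma\) (\(c\times c\)) act as its row (feature-relationship) and column (coding-relationship) covariance matrices. Under this assumed form, alternating optimization would cycle between solving for \(W\) with \(\Omega\) and \(\Sigma\) fixed and updating \(\Omega \propto (W\Sigma^{-1}W^{\top})^{1/2}\) and \(\Sigma \propto (W^{\top}\Omega^{-1}W)^{1/2}\) in closed form, each normalized to unit trace.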