[1]LI Shanshan,ZHAO Qingjie,ZHU Wenlong,et al.Cross-domain knowledge generalization method introducing causal discovery learning[J].CAAI Transactions on Intelligent Systems,2025,20(4):1033-1045.[doi:10.11992/tis.202501005]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume: 20
Issue: 2025(4)
Pages: 1033-1045
Column: Academic Papers - Machine Learning
Publication date: 2025-08-05
- Title:
Cross-domain knowledge generalization method introducing causal discovery learning
- Author(s):
LI Shanshan1,2; ZHAO Qingjie2; ZHU Wenlong1; RUAN Jinjia3; YU Tiejun1; MA Shaohui1; SUN Baosheng1
1. Beijing Jinghang Research Institute of Computing and Communication, Beijing 100074, China;
2. School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China;
3. China Waterborne Transport Research Institute, Beijing 100088, China
- Keywords:
transfer learning; domain generalization; image classification; causality; causal representation learning; variational inference; causal discovery; counterfactual contrastive
- CLC:
TP391
- DOI:
10.11992/tis.202501005
- Abstract:
Domain generalization aims to generalize knowledge from multiple known source domains to unseen target domains. However, existing models are easily affected by high-dimensional noise when extracting image features, which leads to an unstable relationship between the extracted features and the labels. Inspired by the cross-domain invariance of causal mechanisms, we propose a cross-domain knowledge generalization method that introduces causal discovery learning. Specifically, we extract low-dimensional latent features of an image to retain its essential information, and we apply variational inference to these latent features so that the latent feature variables become mutually independent. We then reconstruct the causal directed acyclic graph (DAG) between the latent feature variables and the category labels to discover the latent variables that hold stable causal structures with the labels. Finally, we introduce a counterfactual contrastive regularization term that exploits counterfactual variance and invariance in the data-generation process to perform causal inference and produce causally invariant representations. To validate the proposed method, we evaluate it on five datasets under the DomainBed framework and four datasets under the SWAD framework. Experiments show that, compared with existing methods, our domain generalization model achieves clear improvements in both performance and adaptability.
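The abstract's DAG-reconstruction step requires learning an acyclic graph with gradient-based training. The paper's exact formulation is not given here, but differentiable causal discovery commonly enforces acyclicity with a NOTEARS-style penalty h(W) = tr(exp(W ∘ W)) − d, which is zero exactly when the weighted adjacency matrix W encodes a DAG. A minimal sketch of that penalty (an illustrative assumption, not the authors' implementation; the function name `acyclicity_penalty` is hypothetical):

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(W: np.ndarray) -> float:
    """NOTEARS-style acyclicity measure: h(W) = tr(exp(W * W)) - d.

    Here W * W is the elementwise square of the weighted adjacency
    matrix. h(W) == 0 iff W encodes a directed acyclic graph; larger
    values indicate stronger cyclic structure, so h can be added to a
    training loss as a smooth DAG constraint.
    """
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

# A 3-node DAG (strictly upper-triangular adjacency): penalty is ~0.
W_dag = np.array([[0.0, 0.8, 0.0],
                  [0.0, 0.0, 0.5],
                  [0.0, 0.0, 0.0]])

# Adding an edge 2 -> 0 closes a cycle, making the penalty positive.
W_cyclic = W_dag.copy()
W_cyclic[2, 0] = 0.7

print(acyclicity_penalty(W_dag))     # ~0.0
print(acyclicity_penalty(W_cyclic))  # > 0
```

In a method like the one described, such a penalty would typically be weighted into the overall objective alongside the variational and counterfactual contrastive terms, so the learned structure between latent variables and labels stays acyclic during optimization.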