[1] HE Huacan. Refining the interpretability of artificial intelligence[J]. CAAI Transactions on Intelligent Systems, 2019, 14(3): 393-412. doi: 10.11992/tis.201810020.
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
- Volume: 14
- Issue: 2019, No. 3
- Pages: 393-412
- Column: Review
- Publication date: 2019-05-05
- Title: Refining the interpretability of artificial intelligence
- Author(s): HE Huacan
- Affiliation: School of Computer Science, Northwestern Polytechnical University, Xi’an 710072, China
- Keywords: artificial intelligence; interpretability; evolution; uncertainty; universal logic; flexible propositional logic; flexible neurons; mathematical dialectical logic
- CLC: TP18
- DOI: 10.11992/tis.201810020
- Abstract: In view of the interpretability restrictions in artificial intelligence (AI) research based on deep neural networks, this paper points out that rigid logic (mathematical formal logic) and binary neurons are equivalent: a binary neural network can be converted into a logical expression, which is highly interpretable. A deep neural network, however, blindly increases the number of intermediate layers to fit big data; it neither abstracts data of the smallest granularity (atoms) into knowledge of larger granularity (molecules) in a timely manner nor converts smaller-granularity knowledge into larger-granularity knowledge, and it thereby submerges the original strong explanatory power in an ocean of intermediate layers. To support knowledge processing at multiple granularities, rigid logic should be expanded into flexible propositional logic (proposition-level mathematical dialectical logic) and binary neurons into flexible neurons, so that the strong explanatory power is retained. This paper introduces in detail the expansion from rigid logic to flexible logic and its application in AI research, which is the best method to restore the interpretability of AI.
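The abstract's two central claims, that a binary neuron is equivalent to a rigid-logic expression and that a flexible neuron generalizes it to graded truth values, can be illustrated concretely. The sketch below is not taken from the paper: the conversion of a neuron into a formula is the standard truth-table-to-DNF construction, and the Schweizer-Sklar t-norm family is only assumed here as the flexible conjunction because it is the family commonly associated with universal logic; the paper's exact operators and parameterization may differ.

```python
import itertools

def binary_neuron(weights, threshold, inputs):
    """McCulloch-Pitts binary neuron: fires (1) iff the weighted sum
    of 0/1 inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def to_logic_expression(weights, threshold, names):
    """Convert a binary neuron into an equivalent propositional formula
    (disjunctive normal form) by enumerating its truth table."""
    terms = []
    for inputs in itertools.product([0, 1], repeat=len(names)):
        if binary_neuron(weights, threshold, inputs):
            lits = [n if v else f"~{n}" for n, v in zip(names, inputs)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) if terms else "False"

# A neuron with weights (1, 1) and threshold 2 is exactly logical AND:
print(to_logic_expression([1, 1], 2, ["p", "q"]))  # (p & q)
# Lowering the threshold to 1 turns the same neuron into logical OR:
print(to_logic_expression([1, 1], 1, ["p", "q"]))  # (~p & q) | (p & ~q) | (p & q)

def flexible_and(x, y, m):
    """Flexible conjunction on [0, 1], sketched with the Schweizer-Sklar
    t-norm family (an assumption; universal logic derives its parameter
    from a generalized correlation coefficient). m = 1 gives Lukasiewicz
    AND, and the m -> 0 limit gives the product t-norm."""
    if m == 0:
        return x * y
    return max(0.0, x**m + y**m - 1) ** (1 / m)

# On the boundary values 0/1 every member of the family agrees with the
# rigid Boolean AND, so the binary neuron is the degenerate special case:
print(flexible_and(1.0, 1.0, 1))  # 1.0
print(flexible_and(0.7, 0.8, 1))  # 0.5 (Lukasiewicz conjunction)
```

The point of the sketch is the direction of the argument: each fired row of the truth table becomes one conjunctive term, so the extracted formula is readable by construction, and the flexible operator collapses back to the rigid one at the crisp truth values.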