[1]LIU Wei,LIU Shang,BAI Runcai,et al.Reducing training times in neural network classifiers by using dynamic data reduction[J].CAAI Transactions on Intelligent Systems,2017,12(2):2258-2265.[doi:10.11992/tis.201605031]
CAAI Transactions on Intelligent Systems [ISSN 1673-4785/CN 23-1538/TP]
Volume:
12
Issue:
2017, No. 2
Page number:
2258-2265
Column:
Academic Papers: Machine Learning
Publication date:
2017-05-05
- Title:
Reducing training times in neural network classifiers by using dynamic data reduction
- Author(s):
LIU Wei1; LIU Shang1; BAI Runcai2; ZHOU Xuan1; ZHOU Dingning1
1. College of Science, Liaoning Technical University, Fuxin 123000, China;
2. Mining Institute, Liaoning Technical University, Fuxin 123000, China
- Keywords:
neural network; data reduction; classification boundary; sample weight; boundary sample; kernel sample
- CLC:
TP301.6
- DOI:
10.11992/tis.201605031
- Abstract:
In this paper, we present a neural network classifier training method based on dynamic data reduction (DDR) to address the long training times and poor generalization ability of neural network classifiers. In our approach, each sample is assigned a weight, which is dynamically updated according to its classification error rate at each training iteration. The training set is then reduced according to these weights, increasing the proportion of error-prone boundary samples and reducing the role of redundant kernel samples. Numerical experiments show that our training method not only substantially shortens network training time but also significantly enhances the classification and generalization abilities of the network.
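The abstract's idea can be sketched in code. Note that this record does not give the paper's actual network architecture, weight-update formula, or reduction schedule, so everything below is an assumption-laden illustration: a single-layer perceptron stands in for the neural network classifier, and the weight multipliers (1.5 for misclassified, 0.9 for correct) and the retained fraction `keep_frac` are invented for the sketch, not taken from the paper.

```python
import numpy as np

def ddr_train(X, y, epochs=50, keep_frac=0.6, lr=0.1, seed=None):
    """Illustrative sketch of dynamic-data-reduction training.

    Each sample carries a weight that grows when the sample is
    misclassified (boundary-like) and shrinks when it is classified
    correctly (kernel-like). After every epoch the training set is
    reduced to the highest-weight samples.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.01, size=d)   # perceptron weights
    b = 0.0                              # perceptron bias
    sample_w = np.ones(n)                # per-sample weights
    active = np.arange(n)                # indices of retained samples
    for _ in range(epochs):
        for i in active:
            pred = 1 if X[i] @ w + b > 0 else 0
            err = y[i] - pred
            w += lr * err * X[i]
            b += lr * err
            # boundary samples (misclassified) gain weight;
            # redundant kernel samples (correct) lose weight
            sample_w[i] *= 1.5 if err != 0 else 0.9
        # reduce: keep only the highest-weight fraction of samples
        k = max(1, int(keep_frac * n))
        active = np.argsort(sample_w)[-k:]
    return w, b

# Toy linearly separable data (hypothetical, for demonstration only).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=-2, size=(50, 2)),
               rng.normal(loc=+2, size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = ddr_train(X, y, seed=0)
acc = np.mean((X @ w + b > 0).astype(int) == y)
```

Because the perceptron only updates on errors, training on the reduced high-weight subset concentrates computation on samples near the decision boundary, which is the intuition the abstract describes.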