ZHANG Zhaozhao, QIAO Junfei, YANG Gang. An adaptive algorithm for designing optimal feedforward neural network architecture[J]. CAAI Transactions on Intelligent Systems, 2011, 6(4): 312-317.
CAAI Transactions on Intelligent Systems [ISSN 1673-4785 / CN 23-1538/TP]
Volume: 6
Issue: 2011, No. 4
Pages: 312-317
Column: Academic Papers - Machine Learning
Publication date: 2011-08-25
Title: An adaptive algorithm for designing optimal feedforward neural network architecture
Article ID: 1673-4785(2011)04-0312-06
Author(s): ZHANG Zhaozhao1,2, QIAO Junfei1, YANG Gang1
1. College of Electronic and Control Engineering, Beijing University of Technology, Beijing 100124, China;
2. Institute of Electronic and Information Engineering, Liaoning Technical University, Huludao 125105, China
Keywords: feedforward neural network; architecture design; adaptive search strategy; mutual information
CLC number: TP273
Document code: A
Abstract: Most algorithms for designing feedforward neural network architectures adopt a greedy search strategy and are therefore prone to becoming trapped in locally optimal structures. To address this problem, an adaptive algorithm for designing an optimal feedforward neural network was proposed. During training, an adaptive search strategy was adopted to merge and split hidden units so as to arrive at the optimal network architecture. In the merge operation, hidden units with linearly correlated outputs were merged according to a mutual information criterion; in the split operation, a mutation coefficient was introduced to help the network escape locally optimal structures. The weight adjustment following the merge and split operations was combined with the network's learning on the training samples, which reduced the number of learning passes over the samples, increased the training speed, and improved the generalization performance. Results on nonlinear function approximation show that the proposed algorithm achieves smaller testing errors and a compact final network structure.
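To make the merge step described in the abstract concrete, here is a minimal Python sketch, not the authors' implementation: the function name merge_correlated_hidden_units, the tanh activation, the mi_threshold parameter, and the use of scikit-learn's mutual_info_regression as the mutual information estimator are all illustrative assumptions. Hidden units whose activations share high mutual information are folded together by summing their outgoing weights.

```python
# Minimal sketch of mutual-information-based merging of hidden units.
# Not the paper's implementation; names and the MI estimator are assumed.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def merge_correlated_hidden_units(W_in, b, W_out, X, mi_threshold=0.8):
    """Merge hidden units whose activations carry high mutual information.

    W_in  : (n_hidden, n_in)   input-to-hidden weights
    b     : (n_hidden,)        hidden biases
    W_out : (n_out, n_hidden)  hidden-to-output weights
    X     : (n_samples, n_in)  training inputs
    """
    H = np.tanh(X @ W_in.T + b)           # hidden activations on the samples
    keep = np.ones(H.shape[1], dtype=bool)
    for i in range(H.shape[1]):
        if not keep[i]:
            continue
        for j in range(i + 1, H.shape[1]):
            if not keep[j]:
                continue
            # Estimated MI between the two units' activation streams.
            mi = mutual_info_regression(H[:, [i]], H[:, j])[0]
            if mi > mi_threshold:
                # Fold unit j into unit i by summing outgoing weights; this
                # approximately preserves the network output when the two
                # activations are (nearly) linearly dependent.
                W_out[:, i] += W_out[:, j]
                keep[j] = False
    return W_in[keep], b[keep], W_out[:, keep]
```

In line with the abstract, the weight adjustment after such a merge (or the corresponding split) would be folded into the ongoing training passes rather than run as a separate retraining stage.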
Memo
Received date:
Foundation items: National Natural Science Foundation of China (60873043); National High-Tech R&D Program of China (863 Program) (2009AA04Z155); Beijing Natural Science Foundation (4092010); Doctoral Program Foundation of the Ministry of Education of China (200800050004).
Corresponding author: ZHANG Zhaozhao. E-mail: zzzhao123@126.com
About the authors:
ZHANG Zhaozhao, male, born in 1973, Ph.D. candidate. His research interests include intelligent systems, intelligent information processing, and neural network architecture design and optimization.
QIAO Junfei, male, born in 1968, professor and doctoral supervisor. His research interests include modeling and control of complex processes, computational intelligence, and intelligent optimal control. He has published more than 100 academic papers, over 60 of which are indexed by SCI and EI.
YANG Gang, male, born in 1983, Ph.D. candidate. His research interests include neural computation and intelligent optimal control.
Last update: 2011-09-30