QIAO Junfei, LI Fanjun, YANG Cuili. Review and prospect on neural networks with random weights[J]. CAAI Transactions on Intelligent Systems, 2016, 11(6): 758-767. [doi:10.11992/tis.201612015]

Review and prospect on neural networks with random weights

CAAI Transactions on Intelligent Systems [ISSN: 1673-4785 / CN: 23-1538/TP]

Volume:
Vol. 11
Issue:
No. 6, 2016
Pages:
758-767
Publication date:
2017-01-20

Article Info

Title:
Review and prospect on neural networks with random weights
Author(s):
QIAO Junfei 1,3, LI Fanjun 2, YANG Cuili 1,3
1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China;
2. School of Mathematical Science, University of Jinan, Jinan 250022, China;
3. Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China
Keywords:
neural network with random weights; feedforward neural network; recurrent neural network; cascade neural network; randomized learning algorithm
CLC number:
TP183
DOI:
10.11992/tis.201612015
Abstract:
Randomized learning of neural networks overcomes the slow convergence and local minima inherent in traditional gradient-based algorithms and has recently become a hot topic in the field of neural networks. Based on the idea of randomized learning, neural networks with random weights of various architectures have been proposed. This paper reviews the current research on neural networks with random weights and offers some views on its development trends. First, a simplified model of a neural network with random weights is presented, and the randomized learning algorithm is summarized on the basis of this model. Then, existing work is reviewed, and the performance and random-weight initialization methods of several architectures are analyzed within the simplified model. Finally, several views on the future development of neural networks with random weights are given.
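For readers unfamiliar with the scheme the abstract summarizes, a minimal sketch is given below. It is not the simplified model proposed in the paper, only a generic single-hidden-layer illustration in the common ELM/RVFL style: the hidden-layer weights are drawn at random and frozen, and only the output weights are fitted in closed form by regularized least squares. The uniform initialization range, tanh activation, ridge parameter, and function names are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def train_random_weight_net(X, T, n_hidden=50, reg=1e-3):
    # Hidden-layer parameters are drawn at random and never trained.
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer output matrix
    # Output weights by ridge-regularized least squares:
    # beta = (H'H + reg*I)^{-1} H'T
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: fit y = sin(x) on [-3, 3].
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = train_random_weight_net(X, T)
print(np.mean((predict(X, W, b, beta) - T) ** 2))  # training MSE, should be small

Because the hidden layer is fixed, training reduces to a single linear solve, which is the source of the speed advantage over gradient-based learning that the abstract notes.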

参考文献/References:

[1] QI Min, ZHANG G P. Trend time series modeling and forecasting with neural networks[J]. IEEE transactions on neural networks, 2008, 19(5):808-816.
[2] 赵洪伟, 谢永芳, 蒋朝辉, 等. 基于泡沫图像特征的浮选槽液位智能优化设定方法[J]. 自动化学报, 2014, 40(6):1086-1097. ZHAO Hongwei, XIE Yongfang, JIANG Chaohui, et al. An intelligent optimal setting approach based on froth features for level of flotation cells[J]. Acta automatica sinica, 2014, 40(6):1086-1097.
[3] 乔俊飞, 薄迎春, 韩广. 基于ESN的多指标DHP控制策略在污水处理过程中的应用[J]. 自动化学报, 2013, 39(7):1146-1151. QIAO Junfei, BO Yingchun, HAN Guang. Application of esn-based multi indices dual heuristic dynamic programming on wastewater treatment process[J]. Acta automatica sinica, 2013, 39(7):1146-1151.
[4] CHENG M H, HWANG K S, JENG J H, et al. Classification-based video super-resolution using artificial neural networks[J]. Signal processing, 2013, 93(9):2612-2625.
[5] SCARDAPANE S, WANG Dianhui, PANELLA M. A decentralized training algorithm for echo state networks in distributed big data applications[J]. Neural networks, 2015, 78:65-74.
[6] TENG T H, TAN A H, ZURADA J M. Self-organizing neural networks integrating domain knowledge and reinforcement learning[J]. IEEE transactions on neural networks & learning systems, 2015, 26(5):889-902.
[7] ALHAMDOOSH M, WANG Dianhui. Fast decorrelated neural network ensembles with random weights[J]. Information sciences, 2014, 264:104-117.
[8] GUSTAVSSON A, MAGNUSON A, BLOMBERG B, et al. On the difficulty of training recurrent neural networks[J]. Computer science, 2013, 52(3):337-345.
[9] RIGOTTI M, BARAK O, WARDEN M R, et al. The importance of mixed selectivity in complex cognitive tasks[J]. Nature, 2013, 497(7451):585-590.
[10] BARAK O, RIGOTTI M, FUSI S. The sparseness of mixed selectivity neurons controls the generalization-discrimination trade-off[J]. The journal of neuroscience, 2013, 33(9):3844-3856.
[11] HUANG Gao, HUANG Guangbin, SONG Shiji, et al. Trends in extreme learning machines:a review[J]. Neural networks, 2015, 61:32-48.
[12] DENG Chenwei, HUANG Guangbin, XU Jia, et al. Extreme learning machines:new trends and applications[J]. Science China information sciences, 2015, 58(2):1-16.
[13] PAO Y H, TAKEFUJI Y. Functional-link net computing:theory, system architecture, and functionalities[J]. Computer, 1992, 25(5):76-79.
[14] HUANG Guangbin, ZHU Qinyu, SIEW C K. Extreme learning machine:theory and applications[J]. Neurocomputing, 2006, 70(1/2/3):489-501.
[15] RAHIMI A, RECHT B. Weighted sums of random kitchen sinks:replacing minimization with randomization in learning[C]//Proceedings of the 21st International Conference on Neural Information Processing Systems. Vancouver, British Columbia, Canada:Curran Associates Inc., 2008:1313-1320.
[16] WIDROW B, GREENBLATT A, KIM Y, et al. The No-Prop algorithm:A new learning algorithm for multilayer neural networks[J]. Neural networks, 2013, 37:182-188.
[17] KASUN L L C, ZHOU Hongming, HUANG Guangbin, et al. Representational learning with ELMs for big data[J]. IEEE intelligent systems, 2013, 28(6):31-34.
[18] TANG Jiexiong, DENG Chenwei, HUANG Guangbin. Extreme learning machine for multilayer perceptron[J]. IEEE transactions on neural networks and learning systems, 2016, 27(4):809-821.
[19] QIAO Junfei, LI Fanjun, HAN Honggui, et al. Constructive algorithm for fully connected cascade feedforward neural networks[J]. Neurocomputing, 2016, 182:154-164.
[20] WAN Yihe, SONG Shiji, HUANG Gao. Incremental extreme learning machine based on cascade neural networks[C]//IEEE International Conference on Systems, Man, and Cybernetics. Kowloon:IEEE, 2015:1889-1894.
[21] STEIL J J. Memory in backpropagation-decorrelation O(N) efficient online recurrent learning[C]//Proceedings of the 15th International Conference on Artificial Neural Networks:Formal Models and Their Applications. Berlin Heidelberg:Springer, 2005:750-750.
[22] HERBERT J. The "echo state" approach to analyzing and training recurrent neural networks with an erratum note[R]. Bonn, Germany:German National Research Center for Information Technology, 2001.
[23] MAASS W. Liquid state machines:motivation, theory, and applications[J]. Computability in context, 2010:275-296. DOI:10.1142/9781848162778_0008.
[24] LIANG Nanying, SARATCHANDRAN P, HUANG Guangbin, et al. Classification of mental tasks from EEG signals using extreme learning machine[J]. International journal of neural systems, 2006, 16(1):29-38.
[25] SKOWRONSKI M D, HARRIS J G. Automatic speech recognition using a predictive echo state network classifier[J]. Neural networks, 2007, 20(3):414-423.
[26] LI Decai, HAN Min, WANG Jun. Chaotic time series prediction based on a novel robust echo state network[J]. IEEE transactions on neural networks and learning systems, 2012, 23(5):787-799.
[27] SCHMIDT W F, KRAAIJVELD M A, DUIN R P W. Feedforward neural networks with random weights[C]//Proceedings of 11th International Conference on Pattern Recognition Methodology and Systems. Hague, Holland:IEEE, 1992:1-4.
[28] IGELNIK B, PAO Y H. Stochastic choice of basis functions in adaptive function approximation and the Functional-link neural net[J]. IEEE transactions on neural networks, 1995, 6(6):1320-1329.
[29] HUANG Guangbin. An insight into extreme learning machines:random neurons, random features and kernels[J]. Cognitive computation, 2014, 6(3):376-390.
[30] HUANG Guangbin, ZHU Qinyu, SIEW C K. Extreme learning machine:theory and applications[J]. Neurocomputing, 2006, 70(1/2/3):489-501.
[31] HUANG G B, CHEN L, SIEW C K. Universal approximation using incremental constructive feedforward networks with random hidden nodes[J]. IEEE transactions on neural networks, 2006, 17(4):879-92.
[32] LIU Xia, LIN Shaobo, FANG Jian, et al. Is extreme learning machine feasible? a theoretical assessment (Part I)[J]. IEEE transactions on neural networks and learning systems, 2014, 26(1):7-20.
[33] DEHURI S, CHO S B. A comprehensive survey on functional link neural networks and an adaptive PSO-BP learning for CFLNN[J]. Neural computing and applications, 2010, 19(2):187-205.
[34] ZHANG Le, SUGANTHAN P N. A comprehensive evaluation of random vector functional link networks[J]. Information sciences, 2015, 367/368:1094-1105.
[35] LIANG Nanying, HUANG Guangbin, SARATCHANDRAN P, et al. A fast and accurate online sequential learning algorithm for feedforward networks[J]. IEEE transactions on neural networks, 2006, 17(6):1411-1423.
[36] SCARDAPANE S, WANG Dianhui, PANELLA M, et al. Distributed learning for random vector functional-link networks[J]. Information sciences, 2015, 301:271-284.
[37] ALHAMDOOSH M, WANG Dianhui. Fast decorrelated neural network ensembles with random weights[J]. Information sciences, 2014, 264:104-117.
[38] LI Ying. Orthogonal incremental extreme learning machine for regression and multiclass classification[J]. Neural computing and applications, 2014, 27(1):111-120.
[39] 李凡军, 乔俊飞, 韩红桂. 网络结构增长的极端学习机算法[J]. 控制理论与应用, 2014, 31(5):638-643. LI Fanjun, QIAO Junfei, HAN Honghui. Incremental constructive extreme learning machine[J]. Control theory & applications, 2014, 31(5):638-643.
[40] 李凡军, 韩红桂, 乔俊飞. 基于灵敏度分析法的ELM剪枝算法[J]. 控制与决策, 2014, 29(6):1003-1008. LI Fanjun, HAN Honghui, QIAO Junfei. Pruning algorithm for extreme learning machine based on sensitivity analysis[J]. Control and Decision, 2014, 29(6):1003-1008.
[41] FENG Guorui, HUANG Guangbin, LIN Qingping, et al. Error minimized extreme learning machine with growth of hidden nodes and incremental learning[J]. IEEE transactions on neural networks, 2009, 20(8):1352-1357.
[42] MICHE Y, SORJAMAA A, BAS P, et al. OP-ELM:optimally pruned extreme learning machine[J]. IEEE transactions on neural networks, 2010, 21(1):158-162.
[43] 韩敏, 李德才. 基于替代函数及贝叶斯框架的1范数ELM算法[J]. 自动化学报, 2011, 37(11):1344-1350. HAN Min, LI Decai. An norm 1 regularization term elm algorithm based on surrogate function and bayesian framework[J]. Acta automatica sinica, 2011, 37(11):1344-1350.
[44] MICHE Y, VAN HEESWIJK M, BAS P, et al. TROP-ELM:a double-regularized ELM using LARS and Tikhonov regularization[J]. Neurocomputing, 2011, 74(16):2413-2421.
[45] MICHE Y, AKUSOK A, VEGANZONES D, et al. SOM-ELM-Self-organized clustering using ELM[J]. Neurocomputing, 2015, 165:238-254.
[46] HUANG Gao, LIU Tianchi, YANG Yan, et al. Discriminative clustering via extreme learning machine[J]. Neural networks, 2015, 70:1-8.
[47] ISLAM M M, SATTAR M A, AMIN M F, et al. A new constructive algorithm for architectural and functional adaptation of artificial neural networks[J]. IEEE transactions on systems, man, and cybernetics, part B (cybernetics), 2009, 39(6):1590-1605.
[48] SCHMIDHUBER J. Deep learning in neural networks:an overview[J]. Neural networks, 2014, 61:85-117.
[49] HAN Honggui, WANG Lidan, QIAO Junfei. Hierarchical extreme learning machine for feedforward neural network[J]. Neurocomputing, 2014, 128:128-135.
[50] QU B Y, LANG B F, LIANG J J, et al. Two-hidden-layer extreme learning machine for regression and classification[J]. Neurocomputing, 2016, 175:826-834.
[51] ZHANG Jian, DING Shifei, ZHANG Nan, et al. Incremental extreme learning machine based on deep feature embedded[J]. International journal of machine learning and cybernetics, 2016, 7(1):111-120.
[52] QU B Y, LANG B F, LIANG J J, et al. Two-hidden-layer extreme learning machine for regression and classification[J]. Neurocomputing, 2016, 175:826-834.
[53] ZHANG Nan, DING Shifei, ZHANG Jian. Multi layer ELM-RBF for multi-label learning[J]. Applied soft computing, 2016, 43:535-545.
[54] HUNTER D, YU Hao, PUKISH M S, et al. Selection of proper neural network sizes and architectures-a comparative study[J]. IEEE transactions on industrial informatics, 2012, 8(2):228-240.
[55] WILAMOWSKI B M. Challenges in applications of computational intelligence in industrial electronics[C]//Proceedings of 2010 IEEE International Symposium on Industrial Electronics. Bari:IEEE, 2010:15-22.
[56] WILAMOWSKI B M, COTTON N J, KAYNAK O, et al. Computing gradient vector and jacobian matrix in arbitrarily connected neural networks[J]. IEEE transactions on industrial electronics, 2008, 55(10):3784-3790.
[57] OAKDEN A. Cascade networks and extreme learning machines[D]. Canberra:Australian National University, 2014.
[58] LI Fanjun, QIAO Junfei, HAN Honggui, et al. A self-organizing cascade neural network with random weights for nonlinear system modeling[J]. Applied soft computing, 2016, 42:184-193.
[59] JAEGER H, HAAS H. Harnessing nonlinearity:predicting chaotic systems and saving energy in wireless communication[J]. Science, 2004, 304(5667):78-80.
[60] SKOWRONSKI M D, HARRIS J G. Automatic speech recognition using a predictive echo state network classifier[J]. Neural networks, 2007, 20(3):414-423.
[61] XIA Yili, JELFS B, VAN HULLE M M, et al. An augmented echo state network for nonlinear adaptive filtering of complex noncircular signals[J]. IEEE transactions on neural networks, 2011, 22(1):74-83.
[62] LUKOSEVICIUS M, JAEGER H. Reservoir computing approaches to recurrent neural network training[J]. Computer science review, 2009, 3(3):127-149.
[63] 彭宇, 王建民, 彭喜元. 储备池计算概述[J]. 电子学报, 2011, 39(10):2387-2396. PENG Yu, WANG Jianmin, PENG Xiyuan. Survey on reservoir computing[J]. Acta electronica sinica, 2011, 39(10):2387-2396.
[64] QIAO Junfei, LI Fanjun, HAN Honggui, et al. Growing echo-state network with multiple subreservoirs[J]. IEEE transactions on neural networks and learning systems, 2016, 99:1-14.
[65] DENG Zhidong, ZHANG Yi. Collective behavior of a small-world recurrent neural system with scale-free distribution[J]. IEEE transactions on neural networks, 2007, 18(5):1364-1375.
[66] STRAUSS T, WUSTLICH W, LABAHN R. Design strategies for weight matrices of echo state networks[J]. Neural computation, 2012, 24(12):3246-3276.
[67] XUE Yanbo, YANG Le, HAYKIN S. Decoupled echo state networks with lateral inhibition[J]. Neural networks, 2007, 20(3):365-376.
[68] NAJIBI E, ROSTAMI H. SCESN, SPESN, SWESN:three recurrent neural echo state networks with clustered reservoirs for prediction of nonlinear and chaotic time series[J]. Applied intelligence, 2015, 43(2):460-472.
[69] 薄迎春, 乔俊飞, 张昭昭. 一种具有small world特性的ESN结构分析与设计[J]. 控制与决策, 2012, 27(3):383-388. BO Yingchun, QIAO Junfei, ZHANG Zhaozhao. Analysis and design on structure of small world property ESN[J]. Control and decision, 2012, 27(3):383-388.
[70] 李凡军, 乔俊飞. 一种增量式模块化回声状态网络[J]. 控制与决策, 2016, 31(8):1481-1486. LI Fanjun, QIAO Junfei. An incremental modular echo state network[J]. Control and decision, 2016, 31(8):1481-1486.
[71] SCHRAUWEN B, VERSTRAETEN D, VAN CAMPENHOUT J M. An overview of reservoir computing:Theory, applications and implementations[C]//Proceedings of the 15th European Symposium on Artificial Neural Networks. Bruges, Belgium, 2007:471-482.
[72] MAASS W, NATSCHLÃGER T, MARKRAM H. Real-time computing without stable states:a new framework for neural computation based on perturbations[J]. Neural computation, 2002, 14(11):2531-2560.
[73] LEGENSTEIN R, MAASS W. Edge of chaos and prediction of computational performance for neural circuit models.[J]. Neural networks, 2007, 20(3):323-334.
[74] BURGSTEINER H, KRÖLL M, LEOPOLD A, et al. Movement prediction from real-world images using a liquid state machine[J]. Applied intelligence, 2007, 26(2):99-109.
[75] 张冠元, 王斌. 一种基于液体状态机的音乐和弦序列识别方法[J]. 模式识别与人工智能, 2013, 26(7):643-647. ZHANG Guanyuan, WANG Bin. Liquid state machine based music chord sequence recognition algorithm[J]. Pattern recognition and artificial intelligence, 2013, 26(7):643-647.
[76] HUANG Guangbin, CHEN Lei. Enhanced random search based incremental extreme learning machine[J]. Neurocomputing, 2008, 71(16/17/18):3460-3468.
[77] WANG Yuguang, CAO Feilong, YUAN Yubo. A study on effectiveness of extreme learning machine[J]. Neurocomputing, 2011, 74(16):2483-2490.
[78] CUI H, LIU X, LI L. The architecture of dynamic reservoir in the echo state network[J]. Chaos, 2012, 22(3):033127.
[79] OZTURK M C, XU Dongming. Analysis and design of echo state networks[J]. Neural computation, 2007, 19(1):111-138.
[80] DUAN Haibin, WANG Xiaohua. Echo state networks with orthogonal pigeon-inspired optimization for image restoration[J]. IEEE transactions on neural networks and learning systems, 2015, 27(11):2413-2425.
[81] BOCCATO L, ATTUX R, VON ZUBEN F J. Self-organization and lateral interaction in echo state network reservoirs[J]. Neurocomputing, 2014, 138:297-309.
[82] KORYAKIN D, LOHMANN J, BUTZ M V. Balanced echo state networks[J]. Neural networks the official journal of the international neural network, 2012, 36:35-45.
[83] YUENYONG S, NISHIHARA A. Evolutionary pre-training for CRJ-type reservoir of echo state networks[J]. Neurocomputing, 2015, 149:1324-1329.
[84] LI Xiumin, ZHONG Ling, XUE Fangzheng, et al. A priori data-driven multi-clustered reservoir generation algorithm for echo state network[J]. PLoS one, 2015, 10(4):e0120750.
[85] PALANGI H, DENG Li, WARD R K. Learning input and recurrent weight matrices in echo state networks[C]//Advances in Neural Information Processing Systems 22:Conference on Neural Information Processing Systems. Vancouver, British Columbia, Canada, 2009.

Memo

Received: 2016-12-12.
Foundation items: National Natural Science Foundation of China (61533002, 61603012); Beijing Natural Science Foundation (Z141100001414005); Beijing Municipal Education Commission Foundation (km201410005001, KZ201410005002).
About the authors: QIAO Junfei, male, born in 1968, professor and doctoral supervisor, recipient of the National Science Fund for Distinguished Young Scholars, Changjiang Scholar Distinguished Professor of the Ministry of Education, New Century Excellent Talent of the Ministry of Education, and director of the Science Popularization Committee of the Chinese Association for Artificial Intelligence. His main research interests are intelligent information processing and intelligent control theory and applications. He has won a first prize of the Science and Technology Progress Award of the Ministry of Education and a third prize of the Beijing Science and Technology Award, published more than 100 academic papers (more than 20 indexed by SCI and more than 60 by EI), and holds more than 20 invention patents. LI Fanjun, male, born in 1977, associate professor. His main research interests are intelligent systems and intelligent information processing. He has published more than 10 academic papers, 3 indexed by SCI and 6 by EI. YANG Cuili, female, born in 1986, lecturer. Her main research interests are evolutionary algorithms and intelligent information processing. She has published more than 10 academic papers, 7 indexed by SCI and 12 by EI.
Corresponding author: QIAO Junfei. E-mail: junfeiq@bjut.edu.cn.